00:00:00.001  Started by upstream project "autotest-nightly-lts" build number 2469
00:00:00.001  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3730
00:00:00.001   originally caused by:
00:00:00.001    Started by timer
00:00:00.154  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.155  The recommended git tool is: git
00:00:00.155  using credential 00000000-0000-0000-0000-000000000002
00:00:00.157   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.201  Fetching changes from the remote Git repository
00:00:00.203   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.251  Using shallow fetch with depth 1
00:00:00.251  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.251   > git --version # timeout=10
00:00:00.279   > git --version # 'git version 2.39.2'
00:00:00.279  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.291  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.291   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.960   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.973   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.984  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.984   > git config core.sparsecheckout # timeout=10
00:00:05.994   > git read-tree -mu HEAD # timeout=10
00:00:06.009   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.035  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.035   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.158  [Pipeline] Start of Pipeline
00:00:06.168  [Pipeline] library
00:00:06.170  Loading library shm_lib@master
00:00:06.170  Library shm_lib@master is cached. Copying from home.
00:00:06.181  [Pipeline] node
00:00:06.193  Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:06.194  [Pipeline] {
00:00:06.201  [Pipeline] catchError
00:00:06.202  [Pipeline] {
00:00:06.210  [Pipeline] wrap
00:00:06.215  [Pipeline] {
00:00:06.220  [Pipeline] stage
00:00:06.221  [Pipeline] { (Prologue)
00:00:06.232  [Pipeline] echo
00:00:06.233  Node: VM-host-SM0
00:00:06.237  [Pipeline] cleanWs
00:00:06.246  [WS-CLEANUP] Deleting project workspace...
00:00:06.246  [WS-CLEANUP] Deferred wipeout is used...
00:00:06.252  [WS-CLEANUP] done
00:00:06.429  [Pipeline] setCustomBuildProperty
00:00:06.518  [Pipeline] httpRequest
00:00:06.936  [Pipeline] echo
00:00:06.937  Sorcerer 10.211.164.20 is alive
00:00:06.947  [Pipeline] retry
00:00:06.949  [Pipeline] {
00:00:06.962  [Pipeline] httpRequest
00:00:06.966  HttpMethod: GET
00:00:06.967  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.967  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.979  Response Code: HTTP/1.1 200 OK
00:00:06.979  Success: Status code 200 is in the accepted range: 200,404
00:00:06.980  Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.700  [Pipeline] }
00:00:08.716  [Pipeline] // retry
00:00:08.722  [Pipeline] sh
00:00:09.002  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.015  [Pipeline] httpRequest
00:00:09.391  [Pipeline] echo
00:00:09.393  Sorcerer 10.211.164.20 is alive
00:00:09.402  [Pipeline] retry
00:00:09.405  [Pipeline] {
00:00:09.419  [Pipeline] httpRequest
00:00:09.423  HttpMethod: GET
00:00:09.424  URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.425  Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.438  Response Code: HTTP/1.1 200 OK
00:00:09.439  Success: Status code 200 is in the accepted range: 200,404
00:00:09.440  Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:12.562  [Pipeline] }
00:01:12.580  [Pipeline] // retry
00:01:12.588  [Pipeline] sh
00:01:12.873  + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:15.420  [Pipeline] sh
00:01:15.701  + git -C spdk log --oneline -n5
00:01:15.701  c13c99a5e test: Various fixes for Fedora40
00:01:15.701  726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:15.701  61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:15.701  7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:15.701  ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:15.719  [Pipeline] writeFile
00:01:15.733  [Pipeline] sh
00:01:16.015  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:16.027  [Pipeline] sh
00:01:16.309  + cat autorun-spdk.conf
00:01:16.309  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.309  SPDK_TEST_NVMF=1
00:01:16.309  SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:16.309  SPDK_TEST_VFIOUSER=1
00:01:16.309  SPDK_TEST_USDT=1
00:01:16.309  SPDK_RUN_UBSAN=1
00:01:16.309  SPDK_TEST_NVMF_MDNS=1
00:01:16.309  NET_TYPE=virt
00:01:16.309  SPDK_JSONRPC_GO_CLIENT=1
00:01:16.309  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.316  RUN_NIGHTLY=1
00:01:16.318  [Pipeline] }
00:01:16.332  [Pipeline] // stage
00:01:16.346  [Pipeline] stage
00:01:16.348  [Pipeline] { (Run VM)
00:01:16.361  [Pipeline] sh
00:01:16.642  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:16.642  + echo 'Start stage prepare_nvme.sh'
00:01:16.642  Start stage prepare_nvme.sh
00:01:16.642  + [[ -n 6 ]]
00:01:16.642  + disk_prefix=ex6
00:01:16.642  + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:01:16.642  + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:01:16.642  + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:01:16.642  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.642  ++ SPDK_TEST_NVMF=1
00:01:16.642  ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:16.642  ++ SPDK_TEST_VFIOUSER=1
00:01:16.642  ++ SPDK_TEST_USDT=1
00:01:16.642  ++ SPDK_RUN_UBSAN=1
00:01:16.642  ++ SPDK_TEST_NVMF_MDNS=1
00:01:16.642  ++ NET_TYPE=virt
00:01:16.642  ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:16.642  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.642  ++ RUN_NIGHTLY=1
00:01:16.642  + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:16.642  + nvme_files=()
00:01:16.642  + declare -A nvme_files
00:01:16.642  + backend_dir=/var/lib/libvirt/images/backends
00:01:16.642  + nvme_files['nvme.img']=5G
00:01:16.642  + nvme_files['nvme-cmb.img']=5G
00:01:16.642  + nvme_files['nvme-multi0.img']=4G
00:01:16.642  + nvme_files['nvme-multi1.img']=4G
00:01:16.642  + nvme_files['nvme-multi2.img']=4G
00:01:16.642  + nvme_files['nvme-openstack.img']=8G
00:01:16.642  + nvme_files['nvme-zns.img']=5G
00:01:16.642  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:01:16.642  + ((  SPDK_TEST_FTL == 1  ))
00:01:16.642  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:01:16.642  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:16.642  + for nvme in "${!nvme_files[@]}"
00:01:16.642  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:16.642  Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:16.642  + for nvme in "${!nvme_files[@]}"
00:01:16.642  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:01:16.642  Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:16.642  + for nvme in "${!nvme_files[@]}"
00:01:16.642  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:01:16.642  Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:16.642  + for nvme in "${!nvme_files[@]}"
00:01:16.642  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:01:16.642  Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:16.642  + for nvme in "${!nvme_files[@]}"
00:01:16.642  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:01:16.642  Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:16.642  + for nvme in "${!nvme_files[@]}"
00:01:16.642  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:01:16.901  Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:16.901  + for nvme in "${!nvme_files[@]}"
00:01:16.901  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:01:16.901  Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:16.901  ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:01:16.901  + echo 'End stage prepare_nvme.sh'
00:01:16.902  End stage prepare_nvme.sh
00:01:16.913  [Pipeline] sh
00:01:17.252  + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:17.252  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:01:17.252  
00:01:17.252  DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:01:17.252  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:01:17.252  VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:17.252  HELP=0
00:01:17.252  DRY_RUN=0
00:01:17.252  NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:01:17.252  NVME_DISKS_TYPE=nvme,nvme,
00:01:17.252  NVME_AUTO_CREATE=0
00:01:17.252  NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:01:17.252  NVME_CMB=,,
00:01:17.252  NVME_PMR=,,
00:01:17.252  NVME_ZNS=,,
00:01:17.252  NVME_MS=,,
00:01:17.252  NVME_FDP=,,
00:01:17.252  SPDK_VAGRANT_DISTRO=fedora39
00:01:17.252  SPDK_VAGRANT_VMCPU=10
00:01:17.252  SPDK_VAGRANT_VMRAM=12288
00:01:17.252  SPDK_VAGRANT_PROVIDER=libvirt
00:01:17.252  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:17.252  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:17.252  SPDK_OPENSTACK_NETWORK=0
00:01:17.252  VAGRANT_PACKAGE_BOX=0
00:01:17.252  VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:17.252  FORCE_DISTRO=true
00:01:17.252  VAGRANT_BOX_VERSION=
00:01:17.252  EXTRA_VAGRANTFILES=
00:01:17.252  NIC_MODEL=e1000
00:01:17.252  
00:01:17.252  mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:01:17.252  /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:20.536  Bringing machine 'default' up with 'libvirt' provider...
00:01:20.795  ==> default: Creating image (snapshot of base box volume).
00:01:21.053  ==> default: Creating domain with the following settings...
00:01:21.053  ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1734329497_3f16aa9f6f772a1cbcc5
00:01:21.053  ==> default:  -- Domain type:       kvm
00:01:21.053  ==> default:  -- Cpus:              10
00:01:21.053  ==> default:  -- Feature:           acpi
00:01:21.053  ==> default:  -- Feature:           apic
00:01:21.053  ==> default:  -- Feature:           pae
00:01:21.053  ==> default:  -- Memory:            12288M
00:01:21.053  ==> default:  -- Memory Backing:    hugepages: 
00:01:21.053  ==> default:  -- Management MAC:    
00:01:21.053  ==> default:  -- Loader:            
00:01:21.053  ==> default:  -- Nvram:             
00:01:21.053  ==> default:  -- Base box:          spdk/fedora39
00:01:21.053  ==> default:  -- Storage pool:      default
00:01:21.054  ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734329497_3f16aa9f6f772a1cbcc5.img (20G)
00:01:21.054  ==> default:  -- Volume Cache:      default
00:01:21.054  ==> default:  -- Kernel:            
00:01:21.054  ==> default:  -- Initrd:            
00:01:21.054  ==> default:  -- Graphics Type:     vnc
00:01:21.054  ==> default:  -- Graphics Port:     -1
00:01:21.054  ==> default:  -- Graphics IP:       127.0.0.1
00:01:21.054  ==> default:  -- Graphics Password: Not defined
00:01:21.054  ==> default:  -- Video Type:        cirrus
00:01:21.054  ==> default:  -- Video VRAM:        9216
00:01:21.054  ==> default:  -- Sound Type:	
00:01:21.054  ==> default:  -- Keymap:            en-us
00:01:21.054  ==> default:  -- TPM Path:          
00:01:21.054  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:21.054  ==> default:  -- Command line args: 
00:01:21.054  ==> default:     -> value=-device, 
00:01:21.054  ==> default:     -> value=nvme,id=nvme-0,serial=12340, 
00:01:21.054  ==> default:     -> value=-drive, 
00:01:21.054  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 
00:01:21.054  ==> default:     -> value=-device, 
00:01:21.054  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:21.054  ==> default:     -> value=-device, 
00:01:21.054  ==> default:     -> value=nvme,id=nvme-1,serial=12341, 
00:01:21.054  ==> default:     -> value=-drive, 
00:01:21.054  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 
00:01:21.054  ==> default:     -> value=-device, 
00:01:21.054  ==> default:     -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:21.054  ==> default:     -> value=-drive, 
00:01:21.054  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:01:21.054  ==> default:     -> value=-device, 
00:01:21.054  ==> default:     -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:21.054  ==> default:     -> value=-drive, 
00:01:21.054  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 
00:01:21.054  ==> default:     -> value=-device, 
00:01:21.054  ==> default:     -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:21.312  ==> default: Creating shared folders metadata...
00:01:21.312  ==> default: Starting domain.
00:01:23.217  ==> default: Waiting for domain to get an IP address...
00:01:38.139  ==> default: Waiting for SSH to become available...
00:01:39.074  ==> default: Configuring and enabling network interfaces...
00:01:44.345      default: SSH address: 192.168.121.149:22
00:01:44.345      default: SSH username: vagrant
00:01:44.345      default: SSH auth method: private key
00:01:46.248  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:54.365  ==> default: Mounting SSHFS shared folder...
00:01:55.742  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:55.742  ==> default: Checking Mount..
00:01:57.120  ==> default: Folder Successfully Mounted!
00:01:57.120  ==> default: Running provisioner: file...
00:01:58.058      default: ~/.gitconfig => .gitconfig
00:01:58.317  
00:01:58.317    SUCCESS!
00:01:58.317  
00:01:58.317    cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:58.317    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:58.317    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:58.317  
00:01:58.326  [Pipeline] }
00:01:58.339  [Pipeline] // stage
00:01:58.347  [Pipeline] dir
00:01:58.347  Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:01:58.349  [Pipeline] {
00:01:58.360  [Pipeline] catchError
00:01:58.361  [Pipeline] {
00:01:58.372  [Pipeline] sh
00:01:58.740  + vagrant ssh-config --host vagrant
00:01:58.740  + sed -ne /^Host/,$p
00:01:58.740  + tee ssh_conf
00:02:01.275  Host vagrant
00:02:01.275    HostName 192.168.121.149
00:02:01.275    User vagrant
00:02:01.275    Port 22
00:02:01.275    UserKnownHostsFile /dev/null
00:02:01.275    StrictHostKeyChecking no
00:02:01.275    PasswordAuthentication no
00:02:01.275    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:01.275    IdentitiesOnly yes
00:02:01.275    LogLevel FATAL
00:02:01.275    ForwardAgent yes
00:02:01.275    ForwardX11 yes
00:02:01.275  
00:02:01.289  [Pipeline] withEnv
00:02:01.291  [Pipeline] {
00:02:01.304  [Pipeline] sh
00:02:01.585  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:01.585  		source /etc/os-release
00:02:01.585  		[[ -e /image.version ]] && img=$(< /image.version)
00:02:01.585  		# Minimal, systemd-like check.
00:02:01.585  		if [[ -e /.dockerenv ]]; then
00:02:01.585  			# Clear garbage from the node's name:
00:02:01.585  			#  agt-er_autotest_547-896 -> autotest_547-896
00:02:01.585  			#  $HOSTNAME is the actual container id
00:02:01.585  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:01.585  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:01.585  				# We can assume this is a mount from a host where container is running,
00:02:01.585  				# so fetch its hostname to easily identify the target swarm worker.
00:02:01.585  				container="$(< /etc/hostname) ($agent)"
00:02:01.585  			else
00:02:01.585  				# Fallback
00:02:01.585  				container=$agent
00:02:01.585  			fi
00:02:01.585  		fi
00:02:01.585  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:01.585  
00:02:01.855  [Pipeline] }
00:02:01.872  [Pipeline] // withEnv
00:02:01.880  [Pipeline] setCustomBuildProperty
00:02:01.896  [Pipeline] stage
00:02:01.898  [Pipeline] { (Tests)
00:02:01.915  [Pipeline] sh
00:02:02.196  + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:02.470  [Pipeline] sh
00:02:02.751  + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:03.026  [Pipeline] timeout
00:02:03.026  Timeout set to expire in 1 hr 0 min
00:02:03.028  [Pipeline] {
00:02:03.043  [Pipeline] sh
00:02:03.324  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:03.892  HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:03.904  [Pipeline] sh
00:02:04.185  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:04.457  [Pipeline] sh
00:02:04.737  + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:05.011  [Pipeline] sh
00:02:05.292  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:05.551  ++ readlink -f spdk_repo
00:02:05.551  + DIR_ROOT=/home/vagrant/spdk_repo
00:02:05.551  + [[ -n /home/vagrant/spdk_repo ]]
00:02:05.551  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:05.551  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:05.551  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:05.551  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:05.551  + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:05.551  + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:05.551  + cd /home/vagrant/spdk_repo
00:02:05.551  + source /etc/os-release
00:02:05.551  ++ NAME='Fedora Linux'
00:02:05.551  ++ VERSION='39 (Cloud Edition)'
00:02:05.551  ++ ID=fedora
00:02:05.551  ++ VERSION_ID=39
00:02:05.551  ++ VERSION_CODENAME=
00:02:05.551  ++ PLATFORM_ID=platform:f39
00:02:05.551  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:05.551  ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:05.551  ++ LOGO=fedora-logo-icon
00:02:05.551  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:05.551  ++ HOME_URL=https://fedoraproject.org/
00:02:05.551  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:05.551  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:05.551  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:05.551  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:05.551  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:05.551  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:05.551  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:05.551  ++ SUPPORT_END=2024-11-12
00:02:05.551  ++ VARIANT='Cloud Edition'
00:02:05.551  ++ VARIANT_ID=cloud
00:02:05.551  + uname -a
00:02:05.551  Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:05.551  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:05.551  Hugepages
00:02:05.551  node     hugesize     free /  total
00:02:05.551  node0   1048576kB        0 /      0
00:02:05.551  node0      2048kB        0 /      0
00:02:05.551  
00:02:05.551  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:05.551  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:02:05.551  NVMe                      0000:00:06.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:02:05.810  NVMe                      0000:00:07.0    1b36   0010   unknown nvme             nvme0      nvme0n1 nvme0n2 nvme0n3
00:02:05.810  + rm -f /tmp/spdk-ld-path
00:02:05.810  + source autorun-spdk.conf
00:02:05.810  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.810  ++ SPDK_TEST_NVMF=1
00:02:05.810  ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:05.810  ++ SPDK_TEST_VFIOUSER=1
00:02:05.810  ++ SPDK_TEST_USDT=1
00:02:05.810  ++ SPDK_RUN_UBSAN=1
00:02:05.810  ++ SPDK_TEST_NVMF_MDNS=1
00:02:05.810  ++ NET_TYPE=virt
00:02:05.810  ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:05.810  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.810  ++ RUN_NIGHTLY=1
00:02:05.810  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:02:05.810  + [[ -n '' ]]
00:02:05.810  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:05.810  + for M in /var/spdk/build-*-manifest.txt
00:02:05.810  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:05.810  + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:05.810  + for M in /var/spdk/build-*-manifest.txt
00:02:05.810  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:05.810  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:05.810  + for M in /var/spdk/build-*-manifest.txt
00:02:05.810  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:05.810  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:05.810  ++ uname
00:02:05.810  + [[ Linux == \L\i\n\u\x ]]
00:02:05.810  + sudo dmesg -T
00:02:05.810  + sudo dmesg --clear
00:02:05.810  + dmesg_pid=5233
00:02:05.810  + [[ Fedora Linux == FreeBSD ]]
00:02:05.810  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:05.810  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:05.810  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:05.810  + sudo dmesg -Tw
00:02:05.810  + [[ -x /usr/src/fio-static/fio ]]
00:02:05.810  + export FIO_BIN=/usr/src/fio-static/fio
00:02:05.810  + FIO_BIN=/usr/src/fio-static/fio
00:02:05.810  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:05.810  + [[ ! -v VFIO_QEMU_BIN ]]
00:02:05.810  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:05.810  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:05.810  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:05.810  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:05.810  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:05.810  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:05.810  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:05.810  Test configuration:
00:02:05.810  SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.810  SPDK_TEST_NVMF=1
00:02:05.810  SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:05.810  SPDK_TEST_VFIOUSER=1
00:02:05.810  SPDK_TEST_USDT=1
00:02:05.810  SPDK_RUN_UBSAN=1
00:02:05.810  SPDK_TEST_NVMF_MDNS=1
00:02:05.810  NET_TYPE=virt
00:02:05.810  SPDK_JSONRPC_GO_CLIENT=1
00:02:05.810  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.810  RUN_NIGHTLY=1
00:02:05.810   06:12:22	-- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:02:05.810    06:12:22	-- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:05.810     06:12:22	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:05.810     06:12:22	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:05.810     06:12:22	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:05.810      06:12:22	-- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:05.810      06:12:22	-- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:05.810      06:12:22	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:05.810      06:12:22	-- paths/export.sh@5 -- $ export PATH
00:02:05.810      06:12:22	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:05.810    06:12:22	-- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:05.810      06:12:22	-- common/autobuild_common.sh@440 -- $ date +%s
00:02:05.810     06:12:22	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734329542.XXXXXX
00:02:06.069    06:12:22	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734329542.yGnBn2
00:02:06.070    06:12:22	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:02:06.070    06:12:22	-- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:02:06.070    06:12:22	-- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:06.070    06:12:22	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:06.070    06:12:22	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:06.070     06:12:22	-- common/autobuild_common.sh@456 -- $ get_config_params
00:02:06.070     06:12:22	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:02:06.070     06:12:22	-- common/autotest_common.sh@10 -- $ set +x
00:02:06.070    06:12:22	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang'
00:02:06.070   06:12:22	-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:06.070   06:12:22	-- spdk/autobuild.sh@12 -- $ umask 022
00:02:06.070   06:12:22	-- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:06.070   06:12:22	-- spdk/autobuild.sh@16 -- $ date -u
00:02:06.070  Mon Dec 16 06:12:22 AM UTC 2024
00:02:06.070   06:12:22	-- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:06.070  LTS-67-gc13c99a5e
00:02:06.070   06:12:22	-- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:06.070   06:12:22	-- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:06.070   06:12:22	-- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:06.070   06:12:22	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:06.070   06:12:22	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:06.070   06:12:22	-- common/autotest_common.sh@10 -- $ set +x
00:02:06.070  ************************************
00:02:06.070  START TEST ubsan
00:02:06.070  ************************************
00:02:06.070  using ubsan
00:02:06.070   06:12:22	-- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:02:06.070  
00:02:06.070  real	0m0.000s
00:02:06.070  user	0m0.000s
00:02:06.070  sys	0m0.000s
00:02:06.070   06:12:22	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:06.070  ************************************
00:02:06.070  END TEST ubsan
00:02:06.070   06:12:22	-- common/autotest_common.sh@10 -- $ set +x
00:02:06.070  ************************************
00:02:06.070   06:12:22	-- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:06.070   06:12:22	-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:06.070   06:12:22	-- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:06.070   06:12:22	-- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:06.070   06:12:22	-- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:06.070   06:12:22	-- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:06.070   06:12:22	-- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:06.070   06:12:22	-- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:06.070   06:12:22	-- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared
00:02:06.329  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:06.329  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:06.587  Using 'verbs' RDMA provider
00:02:22.039  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:34.260  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:34.260  go version go1.21.1 linux/amd64
00:02:34.260  Creating mk/config.mk...done.
00:02:34.260  Creating mk/cc.flags.mk...done.
00:02:34.260  Type 'make' to build.
00:02:34.260   06:12:50	-- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:34.260   06:12:50	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:34.260   06:12:50	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:34.260   06:12:50	-- common/autotest_common.sh@10 -- $ set +x
00:02:34.260  ************************************
00:02:34.260  START TEST make
00:02:34.260  ************************************
00:02:34.260   06:12:50	-- common/autotest_common.sh@1114 -- $ make -j10
00:02:34.260  make[1]: Nothing to be done for 'all'.
00:02:34.827  The Meson build system
00:02:34.827  Version: 1.5.0
00:02:34.827  Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user
00:02:34.827  Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:02:34.827  Build type: native build
00:02:34.827  Project name: libvfio-user
00:02:34.827  Project version: 0.0.1
00:02:34.827  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:34.827  C linker for the host machine: cc ld.bfd 2.40-14
00:02:34.827  Host machine cpu family: x86_64
00:02:34.827  Host machine cpu: x86_64
00:02:34.827  Run-time dependency threads found: YES
00:02:34.827  Library dl found: YES
00:02:34.827  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:34.827  Run-time dependency json-c found: YES 0.17
00:02:34.827  Run-time dependency cmocka found: YES 1.1.7
00:02:34.827  Program pytest-3 found: NO
00:02:34.827  Program flake8 found: NO
00:02:34.827  Program misspell-fixer found: NO
00:02:34.827  Program restructuredtext-lint found: NO
00:02:34.827  Program valgrind found: YES (/usr/bin/valgrind)
00:02:34.827  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:34.827  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:34.827  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:34.827  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:34.827  Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh)
00:02:34.827  Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh)
00:02:34.827  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:34.827  Build targets in project: 8
00:02:34.827  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:34.827   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:34.827  
00:02:34.827  libvfio-user 0.0.1
00:02:34.827  
00:02:34.827    User defined options
00:02:34.827      buildtype      : debug
00:02:34.827      default_library: shared
00:02:34.827      libdir         : /usr/local/lib
00:02:34.827  
00:02:34.827  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:35.394  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug'
00:02:35.652  [1/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:35.652  [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:35.652  [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:35.652  [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:35.652  [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:35.652  [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:35.652  [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:35.652  [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:35.652  [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:35.652  [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:35.652  [11/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:35.652  [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:35.652  [13/37] Compiling C object samples/null.p/null.c.o
00:02:35.652  [14/37] Compiling C object samples/client.p/client.c.o
00:02:35.911  [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:35.911  [16/37] Compiling C object samples/server.p/server.c.o
00:02:35.911  [17/37] Linking target samples/client
00:02:35.911  [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:35.911  [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:35.911  [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:35.911  [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:35.911  [22/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:35.911  [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:35.911  [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:35.911  [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:35.911  [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:35.911  [27/37] Linking target lib/libvfio-user.so.0.0.1
00:02:35.911  [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:35.911  [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:36.170  [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:36.170  [31/37] Linking target test/unit_tests
00:02:36.170  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:36.170  [33/37] Linking target samples/server
00:02:36.170  [34/37] Linking target samples/gpio-pci-idio-16
00:02:36.170  [35/37] Linking target samples/lspci
00:02:36.170  [36/37] Linking target samples/null
00:02:36.170  [37/37] Linking target samples/shadow_ioeventfd_server
00:02:36.170  INFO: autodetecting backend as ninja
00:02:36.170  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:02:36.170  DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:02:36.737  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug'
00:02:36.737  ninja: no work to do.
00:02:44.850  The Meson build system
00:02:44.850  Version: 1.5.0
00:02:44.850  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:44.850  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:44.850  Build type: native build
00:02:44.850  Program cat found: YES (/usr/bin/cat)
00:02:44.850  Project name: DPDK
00:02:44.850  Project version: 23.11.0
00:02:44.850  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:44.850  C linker for the host machine: cc ld.bfd 2.40-14
00:02:44.850  Host machine cpu family: x86_64
00:02:44.850  Host machine cpu: x86_64
00:02:44.850  Message: ## Building in Developer Mode ##
00:02:44.850  Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:44.850  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:44.850  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:44.850  Program python3 found: YES (/usr/bin/python3)
00:02:44.850  Program cat found: YES (/usr/bin/cat)
00:02:44.850  Compiler for C supports arguments -march=native: YES 
00:02:44.850  Checking for size of "void *" : 8 
00:02:44.850  Checking for size of "void *" : 8 (cached)
00:02:44.850  Library m found: YES
00:02:44.850  Library numa found: YES
00:02:44.850  Has header "numaif.h" : YES 
00:02:44.850  Library fdt found: NO
00:02:44.850  Library execinfo found: NO
00:02:44.850  Has header "execinfo.h" : YES 
00:02:44.850  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:44.850  Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:44.850  Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:44.850  Run-time dependency jansson found: NO (tried pkgconfig)
00:02:44.850  Run-time dependency openssl found: YES 3.1.1
00:02:44.850  Run-time dependency libpcap found: YES 1.10.4
00:02:44.850  Has header "pcap.h" with dependency libpcap: YES 
00:02:44.850  Compiler for C supports arguments -Wcast-qual: YES 
00:02:44.850  Compiler for C supports arguments -Wdeprecated: YES 
00:02:44.850  Compiler for C supports arguments -Wformat: YES 
00:02:44.850  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:02:44.850  Compiler for C supports arguments -Wformat-security: NO 
00:02:44.851  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:44.851  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:02:44.851  Compiler for C supports arguments -Wnested-externs: YES 
00:02:44.851  Compiler for C supports arguments -Wold-style-definition: YES 
00:02:44.851  Compiler for C supports arguments -Wpointer-arith: YES 
00:02:44.851  Compiler for C supports arguments -Wsign-compare: YES 
00:02:44.851  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:44.851  Compiler for C supports arguments -Wundef: YES 
00:02:44.851  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:44.851  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:44.851  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:02:44.851  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:44.851  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:02:44.851  Program objdump found: YES (/usr/bin/objdump)
00:02:44.851  Compiler for C supports arguments -mavx512f: YES 
00:02:44.851  Checking if "AVX512 checking" compiles: YES 
00:02:44.851  Fetching value of define "__SSE4_2__" : 1 
00:02:44.851  Fetching value of define "__AES__" : 1 
00:02:44.851  Fetching value of define "__AVX__" : 1 
00:02:44.851  Fetching value of define "__AVX2__" : 1 
00:02:44.851  Fetching value of define "__AVX512BW__" : (undefined) 
00:02:44.851  Fetching value of define "__AVX512CD__" : (undefined) 
00:02:44.851  Fetching value of define "__AVX512DQ__" : (undefined) 
00:02:44.851  Fetching value of define "__AVX512F__" : (undefined) 
00:02:44.851  Fetching value of define "__AVX512VL__" : (undefined) 
00:02:44.851  Fetching value of define "__PCLMUL__" : 1 
00:02:44.851  Fetching value of define "__RDRND__" : 1 
00:02:44.851  Fetching value of define "__RDSEED__" : 1 
00:02:44.851  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:02:44.851  Fetching value of define "__znver1__" : (undefined) 
00:02:44.851  Fetching value of define "__znver2__" : (undefined) 
00:02:44.851  Fetching value of define "__znver3__" : (undefined) 
00:02:44.851  Fetching value of define "__znver4__" : (undefined) 
00:02:44.851  Compiler for C supports arguments -Wno-format-truncation: YES 
00:02:44.851  Message: lib/log: Defining dependency "log"
00:02:44.851  Message: lib/kvargs: Defining dependency "kvargs"
00:02:44.851  Message: lib/telemetry: Defining dependency "telemetry"
00:02:44.851  Checking for function "getentropy" : NO 
00:02:44.851  Message: lib/eal: Defining dependency "eal"
00:02:44.851  Message: lib/ring: Defining dependency "ring"
00:02:44.851  Message: lib/rcu: Defining dependency "rcu"
00:02:44.851  Message: lib/mempool: Defining dependency "mempool"
00:02:44.851  Message: lib/mbuf: Defining dependency "mbuf"
00:02:44.851  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:44.851  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:44.851  Compiler for C supports arguments -mpclmul: YES 
00:02:44.851  Compiler for C supports arguments -maes: YES 
00:02:44.851  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:44.851  Compiler for C supports arguments -mavx512bw: YES 
00:02:44.851  Compiler for C supports arguments -mavx512dq: YES 
00:02:44.851  Compiler for C supports arguments -mavx512vl: YES 
00:02:44.851  Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:44.851  Compiler for C supports arguments -mavx2: YES 
00:02:44.851  Compiler for C supports arguments -mavx: YES 
00:02:44.851  Message: lib/net: Defining dependency "net"
00:02:44.851  Message: lib/meter: Defining dependency "meter"
00:02:44.851  Message: lib/ethdev: Defining dependency "ethdev"
00:02:44.851  Message: lib/pci: Defining dependency "pci"
00:02:44.851  Message: lib/cmdline: Defining dependency "cmdline"
00:02:44.851  Message: lib/hash: Defining dependency "hash"
00:02:44.851  Message: lib/timer: Defining dependency "timer"
00:02:44.851  Message: lib/compressdev: Defining dependency "compressdev"
00:02:44.851  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:44.851  Message: lib/dmadev: Defining dependency "dmadev"
00:02:44.851  Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:44.851  Message: lib/power: Defining dependency "power"
00:02:44.851  Message: lib/reorder: Defining dependency "reorder"
00:02:44.851  Message: lib/security: Defining dependency "security"
00:02:44.851  Has header "linux/userfaultfd.h" : YES 
00:02:44.851  Has header "linux/vduse.h" : YES 
00:02:44.851  Message: lib/vhost: Defining dependency "vhost"
00:02:44.851  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:44.851  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:44.851  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:44.851  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:44.851  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:44.851  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:44.851  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:44.851  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:44.851  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:44.851  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:44.851  Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:44.851  Configuring doxy-api-html.conf using configuration
00:02:44.851  Configuring doxy-api-man.conf using configuration
00:02:44.851  Program mandb found: YES (/usr/bin/mandb)
00:02:44.851  Program sphinx-build found: NO
00:02:44.851  Configuring rte_build_config.h using configuration
00:02:44.851  Message: 
00:02:44.851  =================
00:02:44.851  Applications Enabled
00:02:44.851  =================
00:02:44.851  
00:02:44.851  apps:
00:02:44.851  	
00:02:44.851  
00:02:44.851  Message: 
00:02:44.851  =================
00:02:44.851  Libraries Enabled
00:02:44.851  =================
00:02:44.851  
00:02:44.851  libs:
00:02:44.851  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:44.851  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:44.851  	cryptodev, dmadev, power, reorder, security, vhost, 
00:02:44.851  
00:02:44.851  Message: 
00:02:44.851  ===============
00:02:44.851  Drivers Enabled
00:02:44.851  ===============
00:02:44.851  
00:02:44.851  common:
00:02:44.851  	
00:02:44.851  bus:
00:02:44.851  	pci, vdev, 
00:02:44.851  mempool:
00:02:44.851  	ring, 
00:02:44.851  dma:
00:02:44.851  	
00:02:44.851  net:
00:02:44.851  	
00:02:44.851  crypto:
00:02:44.851  	
00:02:44.851  compress:
00:02:44.851  	
00:02:44.851  vdpa:
00:02:44.851  	
00:02:44.851  
00:02:44.851  Message: 
00:02:44.851  =================
00:02:44.851  Content Skipped
00:02:44.851  =================
00:02:44.851  
00:02:44.851  apps:
00:02:44.851  	dumpcap:	explicitly disabled via build config
00:02:44.851  	graph:	explicitly disabled via build config
00:02:44.851  	pdump:	explicitly disabled via build config
00:02:44.851  	proc-info:	explicitly disabled via build config
00:02:44.851  	test-acl:	explicitly disabled via build config
00:02:44.851  	test-bbdev:	explicitly disabled via build config
00:02:44.851  	test-cmdline:	explicitly disabled via build config
00:02:44.851  	test-compress-perf:	explicitly disabled via build config
00:02:44.851  	test-crypto-perf:	explicitly disabled via build config
00:02:44.851  	test-dma-perf:	explicitly disabled via build config
00:02:44.851  	test-eventdev:	explicitly disabled via build config
00:02:44.851  	test-fib:	explicitly disabled via build config
00:02:44.851  	test-flow-perf:	explicitly disabled via build config
00:02:44.851  	test-gpudev:	explicitly disabled via build config
00:02:44.851  	test-mldev:	explicitly disabled via build config
00:02:44.851  	test-pipeline:	explicitly disabled via build config
00:02:44.851  	test-pmd:	explicitly disabled via build config
00:02:44.851  	test-regex:	explicitly disabled via build config
00:02:44.851  	test-sad:	explicitly disabled via build config
00:02:44.851  	test-security-perf:	explicitly disabled via build config
00:02:44.851  	
00:02:44.851  libs:
00:02:44.851  	metrics:	explicitly disabled via build config
00:02:44.851  	acl:	explicitly disabled via build config
00:02:44.851  	bbdev:	explicitly disabled via build config
00:02:44.851  	bitratestats:	explicitly disabled via build config
00:02:44.851  	bpf:	explicitly disabled via build config
00:02:44.851  	cfgfile:	explicitly disabled via build config
00:02:44.851  	distributor:	explicitly disabled via build config
00:02:44.851  	efd:	explicitly disabled via build config
00:02:44.851  	eventdev:	explicitly disabled via build config
00:02:44.851  	dispatcher:	explicitly disabled via build config
00:02:44.851  	gpudev:	explicitly disabled via build config
00:02:44.851  	gro:	explicitly disabled via build config
00:02:44.851  	gso:	explicitly disabled via build config
00:02:44.851  	ip_frag:	explicitly disabled via build config
00:02:44.851  	jobstats:	explicitly disabled via build config
00:02:44.852  	latencystats:	explicitly disabled via build config
00:02:44.852  	lpm:	explicitly disabled via build config
00:02:44.852  	member:	explicitly disabled via build config
00:02:44.852  	pcapng:	explicitly disabled via build config
00:02:44.852  	rawdev:	explicitly disabled via build config
00:02:44.852  	regexdev:	explicitly disabled via build config
00:02:44.852  	mldev:	explicitly disabled via build config
00:02:44.852  	rib:	explicitly disabled via build config
00:02:44.852  	sched:	explicitly disabled via build config
00:02:44.852  	stack:	explicitly disabled via build config
00:02:44.852  	ipsec:	explicitly disabled via build config
00:02:44.852  	pdcp:	explicitly disabled via build config
00:02:44.852  	fib:	explicitly disabled via build config
00:02:44.852  	port:	explicitly disabled via build config
00:02:44.852  	pdump:	explicitly disabled via build config
00:02:44.852  	table:	explicitly disabled via build config
00:02:44.852  	pipeline:	explicitly disabled via build config
00:02:44.852  	graph:	explicitly disabled via build config
00:02:44.852  	node:	explicitly disabled via build config
00:02:44.852  	
00:02:44.852  drivers:
00:02:44.852  	common/cpt:	not in enabled drivers build config
00:02:44.852  	common/dpaax:	not in enabled drivers build config
00:02:44.852  	common/iavf:	not in enabled drivers build config
00:02:44.852  	common/idpf:	not in enabled drivers build config
00:02:44.852  	common/mvep:	not in enabled drivers build config
00:02:44.852  	common/octeontx:	not in enabled drivers build config
00:02:44.852  	bus/auxiliary:	not in enabled drivers build config
00:02:44.852  	bus/cdx:	not in enabled drivers build config
00:02:44.852  	bus/dpaa:	not in enabled drivers build config
00:02:44.852  	bus/fslmc:	not in enabled drivers build config
00:02:44.852  	bus/ifpga:	not in enabled drivers build config
00:02:44.852  	bus/platform:	not in enabled drivers build config
00:02:44.852  	bus/vmbus:	not in enabled drivers build config
00:02:44.852  	common/cnxk:	not in enabled drivers build config
00:02:44.852  	common/mlx5:	not in enabled drivers build config
00:02:44.852  	common/nfp:	not in enabled drivers build config
00:02:44.852  	common/qat:	not in enabled drivers build config
00:02:44.852  	common/sfc_efx:	not in enabled drivers build config
00:02:44.852  	mempool/bucket:	not in enabled drivers build config
00:02:44.852  	mempool/cnxk:	not in enabled drivers build config
00:02:44.852  	mempool/dpaa:	not in enabled drivers build config
00:02:44.852  	mempool/dpaa2:	not in enabled drivers build config
00:02:44.852  	mempool/octeontx:	not in enabled drivers build config
00:02:44.852  	mempool/stack:	not in enabled drivers build config
00:02:44.852  	dma/cnxk:	not in enabled drivers build config
00:02:44.852  	dma/dpaa:	not in enabled drivers build config
00:02:44.852  	dma/dpaa2:	not in enabled drivers build config
00:02:44.852  	dma/hisilicon:	not in enabled drivers build config
00:02:44.852  	dma/idxd:	not in enabled drivers build config
00:02:44.852  	dma/ioat:	not in enabled drivers build config
00:02:44.852  	dma/skeleton:	not in enabled drivers build config
00:02:44.852  	net/af_packet:	not in enabled drivers build config
00:02:44.852  	net/af_xdp:	not in enabled drivers build config
00:02:44.852  	net/ark:	not in enabled drivers build config
00:02:44.852  	net/atlantic:	not in enabled drivers build config
00:02:44.852  	net/avp:	not in enabled drivers build config
00:02:44.852  	net/axgbe:	not in enabled drivers build config
00:02:44.852  	net/bnx2x:	not in enabled drivers build config
00:02:44.852  	net/bnxt:	not in enabled drivers build config
00:02:44.852  	net/bonding:	not in enabled drivers build config
00:02:44.852  	net/cnxk:	not in enabled drivers build config
00:02:44.852  	net/cpfl:	not in enabled drivers build config
00:02:44.852  	net/cxgbe:	not in enabled drivers build config
00:02:44.852  	net/dpaa:	not in enabled drivers build config
00:02:44.852  	net/dpaa2:	not in enabled drivers build config
00:02:44.852  	net/e1000:	not in enabled drivers build config
00:02:44.852  	net/ena:	not in enabled drivers build config
00:02:44.852  	net/enetc:	not in enabled drivers build config
00:02:44.852  	net/enetfec:	not in enabled drivers build config
00:02:44.852  	net/enic:	not in enabled drivers build config
00:02:44.852  	net/failsafe:	not in enabled drivers build config
00:02:44.852  	net/fm10k:	not in enabled drivers build config
00:02:44.852  	net/gve:	not in enabled drivers build config
00:02:44.852  	net/hinic:	not in enabled drivers build config
00:02:44.852  	net/hns3:	not in enabled drivers build config
00:02:44.852  	net/i40e:	not in enabled drivers build config
00:02:44.852  	net/iavf:	not in enabled drivers build config
00:02:44.852  	net/ice:	not in enabled drivers build config
00:02:44.852  	net/idpf:	not in enabled drivers build config
00:02:44.852  	net/igc:	not in enabled drivers build config
00:02:44.852  	net/ionic:	not in enabled drivers build config
00:02:44.852  	net/ipn3ke:	not in enabled drivers build config
00:02:44.852  	net/ixgbe:	not in enabled drivers build config
00:02:44.852  	net/mana:	not in enabled drivers build config
00:02:44.852  	net/memif:	not in enabled drivers build config
00:02:44.852  	net/mlx4:	not in enabled drivers build config
00:02:44.852  	net/mlx5:	not in enabled drivers build config
00:02:44.852  	net/mvneta:	not in enabled drivers build config
00:02:44.852  	net/mvpp2:	not in enabled drivers build config
00:02:44.852  	net/netvsc:	not in enabled drivers build config
00:02:44.852  	net/nfb:	not in enabled drivers build config
00:02:44.852  	net/nfp:	not in enabled drivers build config
00:02:44.852  	net/ngbe:	not in enabled drivers build config
00:02:44.852  	net/null:	not in enabled drivers build config
00:02:44.852  	net/octeontx:	not in enabled drivers build config
00:02:44.852  	net/octeon_ep:	not in enabled drivers build config
00:02:44.852  	net/pcap:	not in enabled drivers build config
00:02:44.852  	net/pfe:	not in enabled drivers build config
00:02:44.852  	net/qede:	not in enabled drivers build config
00:02:44.852  	net/ring:	not in enabled drivers build config
00:02:44.852  	net/sfc:	not in enabled drivers build config
00:02:44.852  	net/softnic:	not in enabled drivers build config
00:02:44.852  	net/tap:	not in enabled drivers build config
00:02:44.852  	net/thunderx:	not in enabled drivers build config
00:02:44.852  	net/txgbe:	not in enabled drivers build config
00:02:44.852  	net/vdev_netvsc:	not in enabled drivers build config
00:02:44.852  	net/vhost:	not in enabled drivers build config
00:02:44.852  	net/virtio:	not in enabled drivers build config
00:02:44.852  	net/vmxnet3:	not in enabled drivers build config
00:02:44.852  	raw/*:	missing internal dependency, "rawdev"
00:02:44.852  	crypto/armv8:	not in enabled drivers build config
00:02:44.852  	crypto/bcmfs:	not in enabled drivers build config
00:02:44.852  	crypto/caam_jr:	not in enabled drivers build config
00:02:44.852  	crypto/ccp:	not in enabled drivers build config
00:02:44.852  	crypto/cnxk:	not in enabled drivers build config
00:02:44.852  	crypto/dpaa_sec:	not in enabled drivers build config
00:02:44.852  	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:44.852  	crypto/ipsec_mb:	not in enabled drivers build config
00:02:44.852  	crypto/mlx5:	not in enabled drivers build config
00:02:44.852  	crypto/mvsam:	not in enabled drivers build config
00:02:44.852  	crypto/nitrox:	not in enabled drivers build config
00:02:44.852  	crypto/null:	not in enabled drivers build config
00:02:44.852  	crypto/octeontx:	not in enabled drivers build config
00:02:44.852  	crypto/openssl:	not in enabled drivers build config
00:02:44.852  	crypto/scheduler:	not in enabled drivers build config
00:02:44.852  	crypto/uadk:	not in enabled drivers build config
00:02:44.852  	crypto/virtio:	not in enabled drivers build config
00:02:44.852  	compress/isal:	not in enabled drivers build config
00:02:44.852  	compress/mlx5:	not in enabled drivers build config
00:02:44.852  	compress/octeontx:	not in enabled drivers build config
00:02:44.852  	compress/zlib:	not in enabled drivers build config
00:02:44.852  	regex/*:	missing internal dependency, "regexdev"
00:02:44.852  	ml/*:	missing internal dependency, "mldev"
00:02:44.852  	vdpa/ifc:	not in enabled drivers build config
00:02:44.852  	vdpa/mlx5:	not in enabled drivers build config
00:02:44.852  	vdpa/nfp:	not in enabled drivers build config
00:02:44.852  	vdpa/sfc:	not in enabled drivers build config
00:02:44.852  	event/*:	missing internal dependency, "eventdev"
00:02:44.852  	baseband/*:	missing internal dependency, "bbdev"
00:02:44.852  	gpu/*:	missing internal dependency, "gpudev"
00:02:44.852  	
00:02:44.852  
00:02:44.852  Build targets in project: 85
00:02:44.852  
00:02:44.852  DPDK 23.11.0
00:02:44.852  
00:02:44.852    User defined options
00:02:44.852      buildtype          : debug
00:02:44.852      default_library    : shared
00:02:44.852      libdir             : lib
00:02:44.852      prefix             : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:44.852      c_args             : -fPIC -Werror  -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:02:44.852      c_link_args        : 
00:02:44.852      cpu_instruction_set: native
00:02:44.852      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:44.852      disable_libs       : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:44.852      enable_docs        : false
00:02:44.852      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring
00:02:44.852      enable_kmods       : false
00:02:44.852      tests              : false
00:02:44.852  
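[Editor's note] The "User defined options" summary above corresponds, roughly, to a meson setup invocation of the following form. This is a sketch reconstructed from the values printed in the log, not the literal command (which is generated by the SPDK build scripts); the option spellings are the standard meson/DPDK option names, and the source-tree location is inferred from the build paths shown above.

    cd /home/vagrant/spdk_repo/spdk/dpdk        # DPDK source tree (inferred from the prefix/build-tmp paths above)
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --buildtype=debug \
        --default-library=shared \
        --libdir=lib \
        -Dcpu_instruction_set=native \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table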
00:02:44.852  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:45.520  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:45.520  [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:45.520  [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:45.520  [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:45.520  [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:45.520  [5/265] Linking static target lib/librte_kvargs.a
00:02:45.520  [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:45.520  [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:45.520  [8/265] Linking static target lib/librte_log.a
00:02:45.520  [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:45.520  [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:46.087  [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.087  [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:46.087  [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:46.087  [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:46.345  [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:46.345  [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:46.346  [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:46.346  [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:46.346  [19/265] Linking static target lib/librte_telemetry.a
00:02:46.346  [20/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.346  [21/265] Linking target lib/librte_log.so.24.0
00:02:46.606  [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:46.606  [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:46.606  [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:46.877  [25/265] Linking target lib/librte_kvargs.so.24.0
00:02:46.877  [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:46.877  [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:46.877  [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:47.149  [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:47.149  [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:47.149  [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:47.150  [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:47.150  [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:47.150  [34/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.408  [35/265] Linking target lib/librte_telemetry.so.24.0
00:02:47.408  [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:47.408  [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:47.408  [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:47.408  [39/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:47.667  [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:47.667  [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:47.667  [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:47.667  [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:47.667  [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:47.925  [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:47.925  [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:47.925  [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:48.184  [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:48.184  [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:48.184  [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:48.441  [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:48.441  [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:48.441  [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:48.699  [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:48.699  [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:48.699  [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:48.699  [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:48.958  [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:48.958  [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:48.958  [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:48.958  [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:49.216  [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:49.216  [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:49.217  [64/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:49.217  [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:49.475  [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:49.475  [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:49.475  [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:49.734  [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:49.734  [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:49.734  [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:49.734  [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:49.734  [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:49.734  [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:49.734  [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:49.992  [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:49.992  [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:49.992  [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:50.251  [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:50.251  [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:50.251  [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:50.510  [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:50.510  [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:50.768  [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:50.768  [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:50.768  [86/265] Linking static target lib/librte_ring.a
00:02:50.768  [87/265] Linking static target lib/librte_eal.a
00:02:51.027  [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:51.027  [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:51.027  [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:51.027  [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:51.027  [92/265] Linking static target lib/librte_mempool.a
00:02:51.286  [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.286  [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:51.286  [95/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:51.286  [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:51.286  [97/265] Linking static target lib/librte_rcu.a
00:02:51.544  [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:51.544  [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:51.803  [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:51.803  [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:51.803  [102/265] Linking static target lib/librte_mbuf.a
00:02:51.803  [103/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.062  [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:52.062  [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:52.062  [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:52.320  [107/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.579  [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:52.579  [109/265] Linking static target lib/librte_meter.a
00:02:52.579  [110/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:52.579  [111/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:52.579  [112/265] Linking static target lib/librte_net.a
00:02:52.837  [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:52.837  [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.095  [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:53.095  [116/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.095  [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.095  [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:53.354  [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:53.920  [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:53.920  [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:53.920  [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:54.179  [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:54.179  [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:54.179  [125/265] Linking static target lib/librte_pci.a
00:02:54.179  [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:54.179  [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:54.437  [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:54.437  [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:54.437  [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:54.437  [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:54.437  [132/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.696  [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:54.696  [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:54.696  [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:54.696  [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:54.696  [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:54.696  [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:54.696  [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:54.696  [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:54.696  [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:54.696  [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:54.696  [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:54.696  [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:54.696  [145/265] Linking static target lib/librte_ethdev.a
00:02:54.954  [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:55.213  [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:55.213  [148/265] Linking static target lib/librte_cmdline.a
00:02:55.213  [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:55.213  [150/265] Linking static target lib/librte_timer.a
00:02:55.213  [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:55.471  [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:55.730  [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:55.730  [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:55.730  [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:55.730  [156/265] Linking static target lib/librte_hash.a
00:02:55.730  [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:55.730  [158/265] Linking static target lib/librte_compressdev.a
00:02:55.988  [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.988  [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:55.988  [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:55.988  [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:56.247  [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:56.505  [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:56.505  [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:56.505  [166/265] Linking static target lib/librte_dmadev.a
00:02:56.505  [167/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.505  [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:56.764  [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:56.764  [170/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.764  [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:56.764  [172/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.764  [173/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:56.764  [174/265] Linking static target lib/librte_cryptodev.a
00:02:57.022  [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:57.281  [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.281  [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:57.281  [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:57.281  [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:57.281  [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:57.281  [181/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:57.848  [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:57.848  [183/265] Linking static target lib/librte_reorder.a
00:02:57.848  [184/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:57.848  [185/265] Linking static target lib/librte_power.a
00:02:57.848  [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:58.107  [187/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:58.107  [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:58.364  [189/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.364  [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:58.364  [191/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:58.364  [192/265] Linking static target lib/librte_security.a
00:02:58.930  [193/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:58.930  [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:58.930  [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:58.930  [196/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.188  [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:59.188  [198/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.188  [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.755  [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:59.755  [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:59.755  [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:59.755  [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:59.755  [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:00.014  [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:00.014  [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:00.014  [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:00.014  [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:00.014  [209/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:00.289  [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:00.289  [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:00.289  [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:00.289  [213/265] Linking static target drivers/librte_bus_vdev.a
00:03:00.289  [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:00.289  [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:00.289  [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:00.289  [217/265] Linking static target drivers/librte_bus_pci.a
00:03:00.289  [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:00.289  [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:00.548  [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.548  [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:00.548  [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.548  [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.548  [224/265] Linking static target drivers/librte_mempool_ring.a
00:03:00.807  [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.065  [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:01.065  [227/265] Linking static target lib/librte_vhost.a
00:03:02.001  [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.001  [229/265] Linking target lib/librte_eal.so.24.0
00:03:02.260  [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:03:02.260  [231/265] Linking target lib/librte_ring.so.24.0
00:03:02.260  [232/265] Linking target lib/librte_meter.so.24.0
00:03:02.260  [233/265] Linking target lib/librte_timer.so.24.0
00:03:02.260  [234/265] Linking target drivers/librte_bus_vdev.so.24.0
00:03:02.260  [235/265] Linking target lib/librte_pci.so.24.0
00:03:02.260  [236/265] Linking target lib/librte_dmadev.so.24.0
00:03:02.260  [237/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.260  [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:03:02.260  [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:03:02.260  [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:03:02.260  [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:03:02.260  [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:03:02.518  [243/265] Linking target drivers/librte_bus_pci.so.24.0
00:03:02.518  [244/265] Linking target lib/librte_rcu.so.24.0
00:03:02.518  [245/265] Linking target lib/librte_mempool.so.24.0
00:03:02.518  [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:03:02.518  [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:03:02.518  [248/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.518  [249/265] Linking target lib/librte_mbuf.so.24.0
00:03:02.518  [250/265] Linking target drivers/librte_mempool_ring.so.24.0
00:03:02.777  [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:03:02.777  [252/265] Linking target lib/librte_compressdev.so.24.0
00:03:02.777  [253/265] Linking target lib/librte_reorder.so.24.0
00:03:02.777  [254/265] Linking target lib/librte_net.so.24.0
00:03:02.777  [255/265] Linking target lib/librte_cryptodev.so.24.0
00:03:03.036  [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:03:03.036  [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:03:03.036  [258/265] Linking target lib/librte_hash.so.24.0
00:03:03.036  [259/265] Linking target lib/librte_cmdline.so.24.0
00:03:03.036  [260/265] Linking target lib/librte_security.so.24.0
00:03:03.036  [261/265] Linking target lib/librte_ethdev.so.24.0
00:03:03.036  [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:03:03.295  [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:03:03.295  [264/265] Linking target lib/librte_power.so.24.0
00:03:03.295  [265/265] Linking target lib/librte_vhost.so.24.0
00:03:03.295  INFO: autodetecting backend as ninja
00:03:03.295  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
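[Editor's note] The backend command printed above is copied verbatim into the sketch below; re-running it by hand from the same workspace to rebuild the DPDK subtree is an assumption, not something the log itself does.

    /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10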
00:03:04.671    CC lib/log/log.o
00:03:04.671    CC lib/log/log_flags.o
00:03:04.671    CC lib/log/log_deprecated.o
00:03:04.671    CC lib/ut/ut.o
00:03:04.671    CC lib/ut_mock/mock.o
00:03:04.671    LIB libspdk_ut_mock.a
00:03:04.671    LIB libspdk_log.a
00:03:04.671    LIB libspdk_ut.a
00:03:04.671    SO libspdk_ut_mock.so.5.0
00:03:04.671    SO libspdk_log.so.6.1
00:03:04.671    SO libspdk_ut.so.1.0
00:03:04.671    SYMLINK libspdk_ut_mock.so
00:03:04.671    SYMLINK libspdk_ut.so
00:03:04.671    SYMLINK libspdk_log.so
00:03:04.929    CC lib/util/base64.o
00:03:04.929    CC lib/util/bit_array.o
00:03:04.929    CC lib/util/cpuset.o
00:03:04.929    CC lib/util/crc16.o
00:03:04.929    CC lib/util/crc32.o
00:03:04.929    CC lib/util/crc32c.o
00:03:04.929    CC lib/ioat/ioat.o
00:03:04.929    CXX lib/trace_parser/trace.o
00:03:04.929    CC lib/dma/dma.o
00:03:04.929    CC lib/vfio_user/host/vfio_user_pci.o
00:03:04.929    CC lib/util/crc32_ieee.o
00:03:04.929    CC lib/util/crc64.o
00:03:04.929    CC lib/vfio_user/host/vfio_user.o
00:03:04.929    CC lib/util/dif.o
00:03:05.187    CC lib/util/fd.o
00:03:05.187    LIB libspdk_dma.a
00:03:05.187    CC lib/util/file.o
00:03:05.187    SO libspdk_dma.so.3.0
00:03:05.187    CC lib/util/hexlify.o
00:03:05.187    CC lib/util/iov.o
00:03:05.187    SYMLINK libspdk_dma.so
00:03:05.187    CC lib/util/math.o
00:03:05.187    LIB libspdk_ioat.a
00:03:05.187    SO libspdk_ioat.so.6.0
00:03:05.187    LIB libspdk_vfio_user.a
00:03:05.187    CC lib/util/pipe.o
00:03:05.187    CC lib/util/strerror_tls.o
00:03:05.187    CC lib/util/string.o
00:03:05.187    SYMLINK libspdk_ioat.so
00:03:05.187    SO libspdk_vfio_user.so.4.0
00:03:05.187    CC lib/util/uuid.o
00:03:05.187    CC lib/util/fd_group.o
00:03:05.187    SYMLINK libspdk_vfio_user.so
00:03:05.187    CC lib/util/xor.o
00:03:05.445    CC lib/util/zipf.o
00:03:05.445    LIB libspdk_util.a
00:03:05.704    SO libspdk_util.so.8.0
00:03:05.704    SYMLINK libspdk_util.so
00:03:05.962    LIB libspdk_trace_parser.a
00:03:05.962    CC lib/json/json_parse.o
00:03:05.962    CC lib/vmd/vmd.o
00:03:05.962    CC lib/env_dpdk/env.o
00:03:05.962    CC lib/env_dpdk/memory.o
00:03:05.962    CC lib/json/json_util.o
00:03:05.962    CC lib/vmd/led.o
00:03:05.962    CC lib/rdma/common.o
00:03:05.962    CC lib/conf/conf.o
00:03:05.962    CC lib/idxd/idxd.o
00:03:05.962    SO libspdk_trace_parser.so.4.0
00:03:05.962    SYMLINK libspdk_trace_parser.so
00:03:05.962    CC lib/idxd/idxd_user.o
00:03:05.962    CC lib/idxd/idxd_kernel.o
00:03:06.221    LIB libspdk_conf.a
00:03:06.221    CC lib/env_dpdk/pci.o
00:03:06.221    CC lib/json/json_write.o
00:03:06.221    SO libspdk_conf.so.5.0
00:03:06.221    CC lib/rdma/rdma_verbs.o
00:03:06.221    SYMLINK libspdk_conf.so
00:03:06.221    CC lib/env_dpdk/init.o
00:03:06.221    CC lib/env_dpdk/threads.o
00:03:06.221    CC lib/env_dpdk/pci_ioat.o
00:03:06.221    CC lib/env_dpdk/pci_virtio.o
00:03:06.480    LIB libspdk_rdma.a
00:03:06.480    CC lib/env_dpdk/pci_vmd.o
00:03:06.480    SO libspdk_rdma.so.5.0
00:03:06.480    LIB libspdk_json.a
00:03:06.480    LIB libspdk_idxd.a
00:03:06.480    SO libspdk_json.so.5.1
00:03:06.480    SO libspdk_idxd.so.11.0
00:03:06.480    SYMLINK libspdk_rdma.so
00:03:06.480    CC lib/env_dpdk/pci_idxd.o
00:03:06.480    CC lib/env_dpdk/pci_event.o
00:03:06.480    CC lib/env_dpdk/sigbus_handler.o
00:03:06.480    SYMLINK libspdk_json.so
00:03:06.480    SYMLINK libspdk_idxd.so
00:03:06.480    CC lib/env_dpdk/pci_dpdk.o
00:03:06.480    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:06.480    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:06.738    CC lib/jsonrpc/jsonrpc_server.o
00:03:06.738    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:06.738    LIB libspdk_vmd.a
00:03:06.738    CC lib/jsonrpc/jsonrpc_client.o
00:03:06.738    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:06.738    SO libspdk_vmd.so.5.0
00:03:06.738    SYMLINK libspdk_vmd.so
00:03:06.996    LIB libspdk_jsonrpc.a
00:03:06.996    SO libspdk_jsonrpc.so.5.1
00:03:06.996    SYMLINK libspdk_jsonrpc.so
00:03:07.255    CC lib/rpc/rpc.o
00:03:07.255    LIB libspdk_env_dpdk.a
00:03:07.255    SO libspdk_env_dpdk.so.13.0
00:03:07.255    LIB libspdk_rpc.a
00:03:07.514    SO libspdk_rpc.so.5.0
00:03:07.514    SYMLINK libspdk_env_dpdk.so
00:03:07.514    SYMLINK libspdk_rpc.so
00:03:07.514    CC lib/sock/sock.o
00:03:07.514    CC lib/sock/sock_rpc.o
00:03:07.514    CC lib/notify/notify_rpc.o
00:03:07.514    CC lib/notify/notify.o
00:03:07.514    CC lib/trace/trace.o
00:03:07.514    CC lib/trace/trace_flags.o
00:03:07.514    CC lib/trace/trace_rpc.o
00:03:07.773    LIB libspdk_notify.a
00:03:07.773    SO libspdk_notify.so.5.0
00:03:07.773    LIB libspdk_trace.a
00:03:07.773    SYMLINK libspdk_notify.so
00:03:07.773    SO libspdk_trace.so.9.0
00:03:08.031    SYMLINK libspdk_trace.so
00:03:08.031    LIB libspdk_sock.a
00:03:08.031    SO libspdk_sock.so.8.0
00:03:08.031    CC lib/thread/iobuf.o
00:03:08.031    CC lib/thread/thread.o
00:03:08.031    SYMLINK libspdk_sock.so
00:03:08.288    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:08.288    CC lib/nvme/nvme_ctrlr.o
00:03:08.288    CC lib/nvme/nvme_fabric.o
00:03:08.288    CC lib/nvme/nvme_ns_cmd.o
00:03:08.288    CC lib/nvme/nvme_ns.o
00:03:08.288    CC lib/nvme/nvme_pcie_common.o
00:03:08.288    CC lib/nvme/nvme_pcie.o
00:03:08.289    CC lib/nvme/nvme_qpair.o
00:03:08.289    CC lib/nvme/nvme.o
00:03:09.222    CC lib/nvme/nvme_quirks.o
00:03:09.222    CC lib/nvme/nvme_transport.o
00:03:09.222    CC lib/nvme/nvme_discovery.o
00:03:09.222    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:09.222    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:09.222    CC lib/nvme/nvme_tcp.o
00:03:09.481    CC lib/nvme/nvme_opal.o
00:03:09.481    CC lib/nvme/nvme_io_msg.o
00:03:09.740    LIB libspdk_thread.a
00:03:09.740    CC lib/nvme/nvme_poll_group.o
00:03:09.740    SO libspdk_thread.so.9.0
00:03:09.740    CC lib/nvme/nvme_zns.o
00:03:09.740    CC lib/nvme/nvme_cuse.o
00:03:09.740    SYMLINK libspdk_thread.so
00:03:09.740    CC lib/nvme/nvme_vfio_user.o
00:03:10.013    CC lib/nvme/nvme_rdma.o
00:03:10.013    CC lib/accel/accel.o
00:03:10.013    CC lib/blob/blobstore.o
00:03:10.013    CC lib/blob/request.o
00:03:10.294    CC lib/blob/zeroes.o
00:03:10.294    CC lib/blob/blob_bs_dev.o
00:03:10.294    CC lib/accel/accel_rpc.o
00:03:10.294    CC lib/accel/accel_sw.o
00:03:10.553    CC lib/init/json_config.o
00:03:10.553    CC lib/init/subsystem.o
00:03:10.553    CC lib/init/subsystem_rpc.o
00:03:10.553    CC lib/virtio/virtio.o
00:03:10.553    CC lib/vfu_tgt/tgt_endpoint.o
00:03:10.553    CC lib/init/rpc.o
00:03:10.553    CC lib/vfu_tgt/tgt_rpc.o
00:03:10.811    CC lib/virtio/virtio_vhost_user.o
00:03:10.811    CC lib/virtio/virtio_vfio_user.o
00:03:10.811    CC lib/virtio/virtio_pci.o
00:03:10.811    LIB libspdk_init.a
00:03:10.811    SO libspdk_init.so.4.0
00:03:10.811    SYMLINK libspdk_init.so
00:03:10.811    LIB libspdk_vfu_tgt.a
00:03:11.070    LIB libspdk_accel.a
00:03:11.070    SO libspdk_vfu_tgt.so.2.0
00:03:11.070    LIB libspdk_virtio.a
00:03:11.070    SO libspdk_accel.so.14.0
00:03:11.070    CC lib/event/app.o
00:03:11.070    CC lib/event/reactor.o
00:03:11.070    CC lib/event/scheduler_static.o
00:03:11.070    CC lib/event/log_rpc.o
00:03:11.070    CC lib/event/app_rpc.o
00:03:11.070    SO libspdk_virtio.so.6.0
00:03:11.070    SYMLINK libspdk_vfu_tgt.so
00:03:11.070    SYMLINK libspdk_accel.so
00:03:11.070    SYMLINK libspdk_virtio.so
00:03:11.070    LIB libspdk_nvme.a
00:03:11.070    CC lib/bdev/bdev.o
00:03:11.070    CC lib/bdev/bdev_rpc.o
00:03:11.070    CC lib/bdev/part.o
00:03:11.070    CC lib/bdev/scsi_nvme.o
00:03:11.070    CC lib/bdev/bdev_zone.o
00:03:11.329    SO libspdk_nvme.so.12.0
00:03:11.329    LIB libspdk_event.a
00:03:11.587    SO libspdk_event.so.12.0
00:03:11.587    SYMLINK libspdk_event.so
00:03:11.587    SYMLINK libspdk_nvme.so
00:03:12.962    LIB libspdk_blob.a
00:03:12.962    SO libspdk_blob.so.10.1
00:03:12.962    SYMLINK libspdk_blob.so
00:03:13.220    CC lib/blobfs/blobfs.o
00:03:13.220    CC lib/blobfs/tree.o
00:03:13.220    CC lib/lvol/lvol.o
00:03:13.478    LIB libspdk_bdev.a
00:03:13.478    SO libspdk_bdev.so.14.0
00:03:13.478    SYMLINK libspdk_bdev.so
00:03:13.736    CC lib/ublk/ublk.o
00:03:13.736    CC lib/ublk/ublk_rpc.o
00:03:13.736    CC lib/nbd/nbd.o
00:03:13.736    CC lib/nbd/nbd_rpc.o
00:03:13.736    CC lib/scsi/dev.o
00:03:13.736    CC lib/scsi/lun.o
00:03:13.736    CC lib/nvmf/ctrlr.o
00:03:13.736    CC lib/ftl/ftl_core.o
00:03:13.994    LIB libspdk_blobfs.a
00:03:13.994    SO libspdk_blobfs.so.9.0
00:03:13.994    CC lib/ftl/ftl_init.o
00:03:13.994    CC lib/ftl/ftl_layout.o
00:03:13.994    SYMLINK libspdk_blobfs.so
00:03:13.994    CC lib/ftl/ftl_debug.o
00:03:13.994    CC lib/ftl/ftl_io.o
00:03:13.994    LIB libspdk_lvol.a
00:03:13.994    SO libspdk_lvol.so.9.1
00:03:13.994    CC lib/scsi/port.o
00:03:14.253    CC lib/scsi/scsi.o
00:03:14.253    LIB libspdk_nbd.a
00:03:14.253    SYMLINK libspdk_lvol.so
00:03:14.253    SO libspdk_nbd.so.6.0
00:03:14.253    CC lib/scsi/scsi_bdev.o
00:03:14.253    CC lib/scsi/scsi_pr.o
00:03:14.253    CC lib/scsi/scsi_rpc.o
00:03:14.253    SYMLINK libspdk_nbd.so
00:03:14.253    CC lib/scsi/task.o
00:03:14.253    CC lib/nvmf/ctrlr_discovery.o
00:03:14.253    CC lib/nvmf/ctrlr_bdev.o
00:03:14.253    CC lib/ftl/ftl_sb.o
00:03:14.253    CC lib/ftl/ftl_l2p.o
00:03:14.253    CC lib/ftl/ftl_l2p_flat.o
00:03:14.511    LIB libspdk_ublk.a
00:03:14.511    SO libspdk_ublk.so.2.0
00:03:14.511    CC lib/ftl/ftl_nv_cache.o
00:03:14.511    SYMLINK libspdk_ublk.so
00:03:14.511    CC lib/ftl/ftl_band.o
00:03:14.511    CC lib/ftl/ftl_band_ops.o
00:03:14.511    CC lib/ftl/ftl_writer.o
00:03:14.511    CC lib/ftl/ftl_rq.o
00:03:14.511    CC lib/ftl/ftl_reloc.o
00:03:14.769    LIB libspdk_scsi.a
00:03:14.769    CC lib/ftl/ftl_l2p_cache.o
00:03:14.769    SO libspdk_scsi.so.8.0
00:03:14.769    CC lib/nvmf/subsystem.o
00:03:14.769    CC lib/ftl/ftl_p2l.o
00:03:14.769    SYMLINK libspdk_scsi.so
00:03:14.769    CC lib/ftl/mngt/ftl_mngt.o
00:03:14.769    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:15.027    CC lib/nvmf/nvmf.o
00:03:15.027    CC lib/iscsi/conn.o
00:03:15.027    CC lib/iscsi/init_grp.o
00:03:15.027    CC lib/iscsi/iscsi.o
00:03:15.027    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:15.027    CC lib/nvmf/nvmf_rpc.o
00:03:15.285    CC lib/nvmf/transport.o
00:03:15.285    CC lib/nvmf/tcp.o
00:03:15.285    CC lib/iscsi/md5.o
00:03:15.543    CC lib/iscsi/param.o
00:03:15.543    CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:15.543    CC lib/iscsi/portal_grp.o
00:03:15.543    CC lib/nvmf/vfio_user.o
00:03:15.543    CC lib/ftl/mngt/ftl_mngt_md.o
00:03:15.801    CC lib/iscsi/tgt_node.o
00:03:15.801    CC lib/iscsi/iscsi_subsystem.o
00:03:15.801    CC lib/nvmf/rdma.o
00:03:15.801    CC lib/iscsi/iscsi_rpc.o
00:03:15.801    CC lib/iscsi/task.o
00:03:16.060    CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:16.060    CC lib/vhost/vhost.o
00:03:16.060    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:16.060    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:16.060    CC lib/vhost/vhost_rpc.o
00:03:16.318    CC lib/vhost/vhost_scsi.o
00:03:16.318    CC lib/vhost/vhost_blk.o
00:03:16.318    CC lib/ftl/mngt/ftl_mngt_band.o
00:03:16.318    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:16.576    LIB libspdk_iscsi.a
00:03:16.576    SO libspdk_iscsi.so.7.0
00:03:16.576    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:16.576    CC lib/vhost/rte_vhost_user.o
00:03:16.835    SYMLINK libspdk_iscsi.so
00:03:16.835    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:16.835    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:16.835    CC lib/ftl/utils/ftl_conf.o
00:03:16.835    CC lib/ftl/utils/ftl_md.o
00:03:16.835    CC lib/ftl/utils/ftl_mempool.o
00:03:16.835    CC lib/ftl/utils/ftl_bitmap.o
00:03:17.093    CC lib/ftl/utils/ftl_property.o
00:03:17.093    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:17.093    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:17.093    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:17.093    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:17.093    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:17.093    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:17.351    CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:17.351    CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:17.351    CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:17.351    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:17.351    CC lib/ftl/base/ftl_base_dev.o
00:03:17.351    CC lib/ftl/base/ftl_base_bdev.o
00:03:17.351    CC lib/ftl/ftl_trace.o
00:03:17.609    LIB libspdk_ftl.a
00:03:17.609    LIB libspdk_vhost.a
00:03:17.868    SO libspdk_vhost.so.7.1
00:03:17.868    SO libspdk_ftl.so.8.0
00:03:17.868    SYMLINK libspdk_vhost.so
00:03:17.868    LIB libspdk_nvmf.a
00:03:18.126    SO libspdk_nvmf.so.17.0
00:03:18.126    SYMLINK libspdk_ftl.so
00:03:18.126    SYMLINK libspdk_nvmf.so
00:03:18.383    CC module/env_dpdk/env_dpdk_rpc.o
00:03:18.383    CC module/vfu_device/vfu_virtio.o
00:03:18.641    CC module/accel/iaa/accel_iaa.o
00:03:18.641    CC module/accel/ioat/accel_ioat.o
00:03:18.641    CC module/blob/bdev/blob_bdev.o
00:03:18.641    CC module/sock/posix/posix.o
00:03:18.641    CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:18.641    CC module/accel/error/accel_error.o
00:03:18.641    CC module/accel/dsa/accel_dsa.o
00:03:18.641    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:18.641    LIB libspdk_env_dpdk_rpc.a
00:03:18.641    SO libspdk_env_dpdk_rpc.so.5.0
00:03:18.641    SYMLINK libspdk_env_dpdk_rpc.so
00:03:18.641    CC module/vfu_device/vfu_virtio_blk.o
00:03:18.641    LIB libspdk_scheduler_dpdk_governor.a
00:03:18.641    CC module/accel/error/accel_error_rpc.o
00:03:18.641    CC module/accel/ioat/accel_ioat_rpc.o
00:03:18.641    SO libspdk_scheduler_dpdk_governor.so.3.0
00:03:18.641    LIB libspdk_scheduler_dynamic.a
00:03:18.641    CC module/accel/iaa/accel_iaa_rpc.o
00:03:18.641    SO libspdk_scheduler_dynamic.so.3.0
00:03:18.641    CC module/accel/dsa/accel_dsa_rpc.o
00:03:18.899    SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:18.899    LIB libspdk_blob_bdev.a
00:03:18.899    SYMLINK libspdk_scheduler_dynamic.so
00:03:18.899    SO libspdk_blob_bdev.so.10.1
00:03:18.899    CC module/vfu_device/vfu_virtio_scsi.o
00:03:18.899    LIB libspdk_accel_ioat.a
00:03:18.899    SYMLINK libspdk_blob_bdev.so
00:03:18.899    LIB libspdk_accel_error.a
00:03:18.899    CC module/vfu_device/vfu_virtio_rpc.o
00:03:18.899    LIB libspdk_accel_iaa.a
00:03:18.899    SO libspdk_accel_ioat.so.5.0
00:03:18.899    SO libspdk_accel_error.so.1.0
00:03:18.899    SO libspdk_accel_iaa.so.2.0
00:03:18.899    CC module/scheduler/gscheduler/gscheduler.o
00:03:18.899    LIB libspdk_accel_dsa.a
00:03:18.899    SYMLINK libspdk_accel_error.so
00:03:18.899    SYMLINK libspdk_accel_ioat.so
00:03:18.899    SO libspdk_accel_dsa.so.4.0
00:03:18.899    SYMLINK libspdk_accel_iaa.so
00:03:19.158    SYMLINK libspdk_accel_dsa.so
00:03:19.158    LIB libspdk_scheduler_gscheduler.a
00:03:19.158    SO libspdk_scheduler_gscheduler.so.3.0
00:03:19.158    CC module/bdev/error/vbdev_error.o
00:03:19.158    CC module/blobfs/bdev/blobfs_bdev.o
00:03:19.158    CC module/bdev/delay/vbdev_delay.o
00:03:19.158    CC module/bdev/malloc/bdev_malloc.o
00:03:19.158    CC module/bdev/gpt/gpt.o
00:03:19.158    CC module/bdev/lvol/vbdev_lvol.o
00:03:19.158    LIB libspdk_vfu_device.a
00:03:19.158    SYMLINK libspdk_scheduler_gscheduler.so
00:03:19.158    CC module/bdev/null/bdev_null.o
00:03:19.158    SO libspdk_vfu_device.so.2.0
00:03:19.158    LIB libspdk_sock_posix.a
00:03:19.416    SO libspdk_sock_posix.so.5.0
00:03:19.416    CC module/bdev/nvme/bdev_nvme.o
00:03:19.416    SYMLINK libspdk_vfu_device.so
00:03:19.416    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:19.416    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:19.416    CC module/bdev/gpt/vbdev_gpt.o
00:03:19.416    SYMLINK libspdk_sock_posix.so
00:03:19.416    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:19.416    CC module/bdev/error/vbdev_error_rpc.o
00:03:19.416    CC module/bdev/null/bdev_null_rpc.o
00:03:19.416    LIB libspdk_blobfs_bdev.a
00:03:19.416    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:19.416    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:19.416    SO libspdk_blobfs_bdev.so.5.0
00:03:19.676    LIB libspdk_bdev_malloc.a
00:03:19.676    SO libspdk_bdev_malloc.so.5.0
00:03:19.676    LIB libspdk_bdev_error.a
00:03:19.676    SYMLINK libspdk_blobfs_bdev.so
00:03:19.676    SO libspdk_bdev_error.so.5.0
00:03:19.676    LIB libspdk_bdev_gpt.a
00:03:19.676    SYMLINK libspdk_bdev_malloc.so
00:03:19.676    LIB libspdk_bdev_null.a
00:03:19.676    SO libspdk_bdev_gpt.so.5.0
00:03:19.676    SO libspdk_bdev_null.so.5.0
00:03:19.676    SYMLINK libspdk_bdev_error.so
00:03:19.676    LIB libspdk_bdev_delay.a
00:03:19.676    SYMLINK libspdk_bdev_gpt.so
00:03:19.676    SYMLINK libspdk_bdev_null.so
00:03:19.676    SO libspdk_bdev_delay.so.5.0
00:03:19.676    CC module/bdev/nvme/nvme_rpc.o
00:03:19.676    CC module/bdev/passthru/vbdev_passthru.o
00:03:19.676    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:19.676    CC module/bdev/raid/bdev_raid.o
00:03:19.676    CC module/bdev/split/vbdev_split.o
00:03:19.677    SYMLINK libspdk_bdev_delay.so
00:03:19.677    CC module/bdev/split/vbdev_split_rpc.o
00:03:19.965    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:19.965    LIB libspdk_bdev_lvol.a
00:03:19.965    SO libspdk_bdev_lvol.so.5.0
00:03:19.965    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:19.965    CC module/bdev/raid/bdev_raid_rpc.o
00:03:19.965    SYMLINK libspdk_bdev_lvol.so
00:03:19.965    CC module/bdev/nvme/bdev_mdns_client.o
00:03:19.965    CC module/bdev/nvme/vbdev_opal.o
00:03:19.965    LIB libspdk_bdev_split.a
00:03:19.965    SO libspdk_bdev_split.so.5.0
00:03:19.965    LIB libspdk_bdev_passthru.a
00:03:20.248    CC module/bdev/raid/bdev_raid_sb.o
00:03:20.248    SO libspdk_bdev_passthru.so.5.0
00:03:20.248    CC module/bdev/aio/bdev_aio.o
00:03:20.248    SYMLINK libspdk_bdev_split.so
00:03:20.248    CC module/bdev/raid/raid0.o
00:03:20.248    SYMLINK libspdk_bdev_passthru.so
00:03:20.248    CC module/bdev/raid/raid1.o
00:03:20.248    LIB libspdk_bdev_zone_block.a
00:03:20.248    SO libspdk_bdev_zone_block.so.5.0
00:03:20.248    CC module/bdev/ftl/bdev_ftl.o
00:03:20.248    CC module/bdev/raid/concat.o
00:03:20.248    CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:20.248    SYMLINK libspdk_bdev_zone_block.so
00:03:20.248    CC module/bdev/aio/bdev_aio_rpc.o
00:03:20.248    CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:20.506    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:20.506    LIB libspdk_bdev_aio.a
00:03:20.506    SO libspdk_bdev_aio.so.5.0
00:03:20.506    LIB libspdk_bdev_ftl.a
00:03:20.506    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:20.506    CC module/bdev/iscsi/bdev_iscsi.o
00:03:20.506    SYMLINK libspdk_bdev_aio.so
00:03:20.506    CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:20.506    CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:20.506    SO libspdk_bdev_ftl.so.5.0
00:03:20.506    CC module/bdev/virtio/bdev_virtio_blk.o
00:03:20.506    SYMLINK libspdk_bdev_ftl.so
00:03:20.764    LIB libspdk_bdev_raid.a
00:03:20.764    SO libspdk_bdev_raid.so.5.0
00:03:20.764    LIB libspdk_bdev_iscsi.a
00:03:20.764    SYMLINK libspdk_bdev_raid.so
00:03:21.023    SO libspdk_bdev_iscsi.so.5.0
00:03:21.023    SYMLINK libspdk_bdev_iscsi.so
00:03:21.023    LIB libspdk_bdev_virtio.a
00:03:21.023    SO libspdk_bdev_virtio.so.5.0
00:03:21.281    SYMLINK libspdk_bdev_virtio.so
00:03:21.539    LIB libspdk_bdev_nvme.a
00:03:21.539    SO libspdk_bdev_nvme.so.6.0
00:03:21.539    SYMLINK libspdk_bdev_nvme.so
00:03:21.798    CC module/event/subsystems/vmd/vmd.o
00:03:21.798    CC module/event/subsystems/vmd/vmd_rpc.o
00:03:21.798    CC module/event/subsystems/iobuf/iobuf.o
00:03:21.798    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:21.798    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:21.798    CC module/event/subsystems/sock/sock.o
00:03:21.798    CC module/event/subsystems/scheduler/scheduler.o
00:03:22.056    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:22.056    LIB libspdk_event_sock.a
00:03:22.056    LIB libspdk_event_vfu_tgt.a
00:03:22.056    LIB libspdk_event_vhost_blk.a
00:03:22.056    SO libspdk_event_sock.so.4.0
00:03:22.056    SO libspdk_event_vfu_tgt.so.2.0
00:03:22.056    LIB libspdk_event_vmd.a
00:03:22.056    LIB libspdk_event_iobuf.a
00:03:22.056    SO libspdk_event_vhost_blk.so.2.0
00:03:22.056    LIB libspdk_event_scheduler.a
00:03:22.056    SO libspdk_event_vmd.so.5.0
00:03:22.056    SO libspdk_event_iobuf.so.2.0
00:03:22.056    SYMLINK libspdk_event_vfu_tgt.so
00:03:22.056    SO libspdk_event_scheduler.so.3.0
00:03:22.056    SYMLINK libspdk_event_vhost_blk.so
00:03:22.056    SYMLINK libspdk_event_sock.so
00:03:22.056    SYMLINK libspdk_event_vmd.so
00:03:22.314    SYMLINK libspdk_event_iobuf.so
00:03:22.314    SYMLINK libspdk_event_scheduler.so
00:03:22.314    CC module/event/subsystems/accel/accel.o
00:03:22.572    LIB libspdk_event_accel.a
00:03:22.572    SO libspdk_event_accel.so.5.0
00:03:22.572    SYMLINK libspdk_event_accel.so
00:03:22.831    CC module/event/subsystems/bdev/bdev.o
00:03:23.089    LIB libspdk_event_bdev.a
00:03:23.089    SO libspdk_event_bdev.so.5.0
00:03:23.089    SYMLINK libspdk_event_bdev.so
00:03:23.347    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:23.347    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:23.347    CC module/event/subsystems/ublk/ublk.o
00:03:23.347    CC module/event/subsystems/scsi/scsi.o
00:03:23.347    CC module/event/subsystems/nbd/nbd.o
00:03:23.347    LIB libspdk_event_nbd.a
00:03:23.347    LIB libspdk_event_ublk.a
00:03:23.605    LIB libspdk_event_scsi.a
00:03:23.605    SO libspdk_event_nbd.so.5.0
00:03:23.605    SO libspdk_event_ublk.so.2.0
00:03:23.605    SO libspdk_event_scsi.so.5.0
00:03:23.605    SYMLINK libspdk_event_nbd.so
00:03:23.605    SYMLINK libspdk_event_ublk.so
00:03:23.605    LIB libspdk_event_nvmf.a
00:03:23.605    SYMLINK libspdk_event_scsi.so
00:03:23.605    SO libspdk_event_nvmf.so.5.0
00:03:23.605    SYMLINK libspdk_event_nvmf.so
00:03:23.863    CC module/event/subsystems/iscsi/iscsi.o
00:03:23.863    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:23.863    LIB libspdk_event_vhost_scsi.a
00:03:23.863    LIB libspdk_event_iscsi.a
00:03:23.863    SO libspdk_event_vhost_scsi.so.2.0
00:03:23.863    SO libspdk_event_iscsi.so.5.0
00:03:24.121    SYMLINK libspdk_event_vhost_scsi.so
00:03:24.121    SYMLINK libspdk_event_iscsi.so
00:03:24.121    SO libspdk.so.5.0
00:03:24.121    SYMLINK libspdk.so
00:03:24.379    CXX app/trace/trace.o
00:03:24.379    CC examples/ioat/perf/perf.o
00:03:24.379    CC examples/nvme/hello_world/hello_world.o
00:03:24.379    CC examples/sock/hello_world/hello_sock.o
00:03:24.379    CC examples/vmd/lsvmd/lsvmd.o
00:03:24.379    CC examples/accel/perf/accel_perf.o
00:03:24.379    CC examples/bdev/hello_world/hello_bdev.o
00:03:24.379    CC examples/blob/hello_world/hello_blob.o
00:03:24.379    CC examples/nvmf/nvmf/nvmf.o
00:03:24.379    CC test/accel/dif/dif.o
00:03:24.637    LINK lsvmd
00:03:24.637    LINK ioat_perf
00:03:24.637    LINK hello_world
00:03:24.637    LINK hello_sock
00:03:24.637    LINK hello_bdev
00:03:24.637    LINK hello_blob
00:03:24.637    LINK nvmf
00:03:24.895    LINK spdk_trace
00:03:24.895    CC examples/vmd/led/led.o
00:03:24.895    LINK accel_perf
00:03:24.895    LINK dif
00:03:24.895    CC examples/ioat/verify/verify.o
00:03:24.895    CC examples/nvme/reconnect/reconnect.o
00:03:24.895    CC examples/blob/cli/blobcli.o
00:03:24.895    LINK led
00:03:24.895    CC examples/nvme/nvme_manage/nvme_manage.o
00:03:24.895    CC examples/bdev/bdevperf/bdevperf.o
00:03:25.154    CC app/trace_record/trace_record.o
00:03:25.154    CC examples/util/zipf/zipf.o
00:03:25.154    LINK verify
00:03:25.154    CC examples/thread/thread/thread_ex.o
00:03:25.154    CC test/app/bdev_svc/bdev_svc.o
00:03:25.154    LINK zipf
00:03:25.154    LINK reconnect
00:03:25.154    CC test/bdev/bdevio/bdevio.o
00:03:25.412    LINK spdk_trace_record
00:03:25.412    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:25.412    LINK bdev_svc
00:03:25.412    LINK blobcli
00:03:25.412    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:25.412    LINK thread
00:03:25.412    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:25.412    LINK nvme_manage
00:03:25.670    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:25.670    CC app/nvmf_tgt/nvmf_main.o
00:03:25.670    LINK bdevio
00:03:25.670    CC app/iscsi_tgt/iscsi_tgt.o
00:03:25.670    CC app/spdk_lspci/spdk_lspci.o
00:03:25.670    CC examples/nvme/arbitration/arbitration.o
00:03:25.670    LINK bdevperf
00:03:25.670    LINK nvme_fuzz
00:03:25.928    CC app/spdk_tgt/spdk_tgt.o
00:03:25.928    LINK nvmf_tgt
00:03:25.928    LINK spdk_lspci
00:03:25.928    CC app/spdk_nvme_perf/perf.o
00:03:25.928    LINK iscsi_tgt
00:03:25.928    CC app/spdk_nvme_identify/identify.o
00:03:25.928    CC test/app/histogram_perf/histogram_perf.o
00:03:25.928    LINK spdk_tgt
00:03:25.928    LINK vhost_fuzz
00:03:26.186    LINK arbitration
00:03:26.186    CC examples/idxd/perf/perf.o
00:03:26.186    LINK histogram_perf
00:03:26.186    CC app/spdk_nvme_discover/discovery_aer.o
00:03:26.186    CC examples/interrupt_tgt/interrupt_tgt.o
00:03:26.186    CC examples/nvme/hotplug/hotplug.o
00:03:26.186    CC app/spdk_top/spdk_top.o
00:03:26.445    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:26.445    LINK idxd_perf
00:03:26.445    LINK interrupt_tgt
00:03:26.445    LINK spdk_nvme_discover
00:03:26.445    LINK hotplug
00:03:26.704    LINK cmb_copy
00:03:26.704    CC examples/nvme/abort/abort.o
00:03:26.704    LINK spdk_nvme_perf
00:03:26.704    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:26.704    CC test/app/jsoncat/jsoncat.o
00:03:26.704    CC app/vhost/vhost.o
00:03:26.704    CC test/app/stub/stub.o
00:03:26.704    LINK spdk_nvme_identify
00:03:26.963    LINK pmr_persistence
00:03:26.963    LINK jsoncat
00:03:26.963    CC app/spdk_dd/spdk_dd.o
00:03:26.963    LINK stub
00:03:26.963    LINK vhost
00:03:26.963    LINK abort
00:03:27.221    LINK iscsi_fuzz
00:03:27.221    CC app/fio/nvme/fio_plugin.o
00:03:27.221    CC app/fio/bdev/fio_plugin.o
00:03:27.221    LINK spdk_top
00:03:27.221    TEST_HEADER include/spdk/accel.h
00:03:27.221    TEST_HEADER include/spdk/accel_module.h
00:03:27.221    TEST_HEADER include/spdk/assert.h
00:03:27.221    TEST_HEADER include/spdk/barrier.h
00:03:27.221    TEST_HEADER include/spdk/base64.h
00:03:27.221    TEST_HEADER include/spdk/bdev.h
00:03:27.221    TEST_HEADER include/spdk/bdev_module.h
00:03:27.221    TEST_HEADER include/spdk/bdev_zone.h
00:03:27.221    TEST_HEADER include/spdk/bit_array.h
00:03:27.221    TEST_HEADER include/spdk/bit_pool.h
00:03:27.221    CC test/blobfs/mkfs/mkfs.o
00:03:27.221    TEST_HEADER include/spdk/blob_bdev.h
00:03:27.221    TEST_HEADER include/spdk/blobfs_bdev.h
00:03:27.221    TEST_HEADER include/spdk/blobfs.h
00:03:27.221    TEST_HEADER include/spdk/blob.h
00:03:27.221    TEST_HEADER include/spdk/conf.h
00:03:27.221    TEST_HEADER include/spdk/config.h
00:03:27.221    TEST_HEADER include/spdk/cpuset.h
00:03:27.221    TEST_HEADER include/spdk/crc16.h
00:03:27.221    TEST_HEADER include/spdk/crc32.h
00:03:27.221    TEST_HEADER include/spdk/crc64.h
00:03:27.221    TEST_HEADER include/spdk/dif.h
00:03:27.221    TEST_HEADER include/spdk/dma.h
00:03:27.221    TEST_HEADER include/spdk/endian.h
00:03:27.221    TEST_HEADER include/spdk/env_dpdk.h
00:03:27.221    TEST_HEADER include/spdk/env.h
00:03:27.221    TEST_HEADER include/spdk/event.h
00:03:27.221    TEST_HEADER include/spdk/fd_group.h
00:03:27.221    TEST_HEADER include/spdk/fd.h
00:03:27.480    TEST_HEADER include/spdk/file.h
00:03:27.480    TEST_HEADER include/spdk/ftl.h
00:03:27.480    TEST_HEADER include/spdk/gpt_spec.h
00:03:27.480    TEST_HEADER include/spdk/hexlify.h
00:03:27.480    TEST_HEADER include/spdk/histogram_data.h
00:03:27.480    TEST_HEADER include/spdk/idxd.h
00:03:27.480    TEST_HEADER include/spdk/idxd_spec.h
00:03:27.480    TEST_HEADER include/spdk/init.h
00:03:27.480    TEST_HEADER include/spdk/ioat.h
00:03:27.480    TEST_HEADER include/spdk/ioat_spec.h
00:03:27.480    TEST_HEADER include/spdk/iscsi_spec.h
00:03:27.480    TEST_HEADER include/spdk/json.h
00:03:27.480    TEST_HEADER include/spdk/jsonrpc.h
00:03:27.480    TEST_HEADER include/spdk/likely.h
00:03:27.480    TEST_HEADER include/spdk/log.h
00:03:27.480    TEST_HEADER include/spdk/lvol.h
00:03:27.480    TEST_HEADER include/spdk/memory.h
00:03:27.480    TEST_HEADER include/spdk/mmio.h
00:03:27.480    TEST_HEADER include/spdk/nbd.h
00:03:27.480    TEST_HEADER include/spdk/notify.h
00:03:27.480    TEST_HEADER include/spdk/nvme.h
00:03:27.480    TEST_HEADER include/spdk/nvme_intel.h
00:03:27.480    TEST_HEADER include/spdk/nvme_ocssd.h
00:03:27.480    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:27.480    TEST_HEADER include/spdk/nvme_spec.h
00:03:27.480    TEST_HEADER include/spdk/nvme_zns.h
00:03:27.480    TEST_HEADER include/spdk/nvmf_cmd.h
00:03:27.480    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:27.480    TEST_HEADER include/spdk/nvmf.h
00:03:27.480    TEST_HEADER include/spdk/nvmf_spec.h
00:03:27.480    TEST_HEADER include/spdk/nvmf_transport.h
00:03:27.480    TEST_HEADER include/spdk/opal.h
00:03:27.480    TEST_HEADER include/spdk/opal_spec.h
00:03:27.480    TEST_HEADER include/spdk/pci_ids.h
00:03:27.480    TEST_HEADER include/spdk/pipe.h
00:03:27.480    TEST_HEADER include/spdk/queue.h
00:03:27.480    TEST_HEADER include/spdk/reduce.h
00:03:27.480    TEST_HEADER include/spdk/rpc.h
00:03:27.480    LINK spdk_dd
00:03:27.480    TEST_HEADER include/spdk/scheduler.h
00:03:27.480    TEST_HEADER include/spdk/scsi.h
00:03:27.480    TEST_HEADER include/spdk/scsi_spec.h
00:03:27.480    TEST_HEADER include/spdk/sock.h
00:03:27.480    TEST_HEADER include/spdk/stdinc.h
00:03:27.480    TEST_HEADER include/spdk/string.h
00:03:27.480    TEST_HEADER include/spdk/thread.h
00:03:27.480    TEST_HEADER include/spdk/trace.h
00:03:27.480    TEST_HEADER include/spdk/trace_parser.h
00:03:27.480    TEST_HEADER include/spdk/tree.h
00:03:27.480    TEST_HEADER include/spdk/ublk.h
00:03:27.480    TEST_HEADER include/spdk/util.h
00:03:27.480    CC test/dma/test_dma/test_dma.o
00:03:27.480    TEST_HEADER include/spdk/uuid.h
00:03:27.480    TEST_HEADER include/spdk/version.h
00:03:27.480    TEST_HEADER include/spdk/vfio_user_pci.h
00:03:27.480    TEST_HEADER include/spdk/vfio_user_spec.h
00:03:27.480    TEST_HEADER include/spdk/vhost.h
00:03:27.480    TEST_HEADER include/spdk/vmd.h
00:03:27.480    TEST_HEADER include/spdk/xor.h
00:03:27.480    TEST_HEADER include/spdk/zipf.h
00:03:27.480    CXX test/cpp_headers/accel.o
00:03:27.480    CC test/event/reactor/reactor.o
00:03:27.480    CC test/event/event_perf/event_perf.o
00:03:27.480    LINK mkfs
00:03:27.738    CC test/env/mem_callbacks/mem_callbacks.o
00:03:27.738    LINK reactor
00:03:27.738    CXX test/cpp_headers/accel_module.o
00:03:27.738    LINK event_perf
00:03:27.738    CC test/env/vtophys/vtophys.o
00:03:27.738    LINK spdk_bdev
00:03:27.738    LINK spdk_nvme
00:03:27.997    LINK test_dma
00:03:27.997    CXX test/cpp_headers/assert.o
00:03:27.997    CC test/lvol/esnap/esnap.o
00:03:27.997    LINK vtophys
00:03:27.997    CC test/event/reactor_perf/reactor_perf.o
00:03:27.997    CC test/rpc_client/rpc_client_test.o
00:03:27.997    CC test/nvme/aer/aer.o
00:03:27.997    CC test/thread/poller_perf/poller_perf.o
00:03:27.997    LINK reactor_perf
00:03:27.997    CXX test/cpp_headers/barrier.o
00:03:28.260    LINK rpc_client_test
00:03:28.260    CC test/nvme/reset/reset.o
00:03:28.260    CC test/event/app_repeat/app_repeat.o
00:03:28.260    LINK poller_perf
00:03:28.260    LINK aer
00:03:28.260    CXX test/cpp_headers/base64.o
00:03:28.260    LINK mem_callbacks
00:03:28.260    CXX test/cpp_headers/bdev.o
00:03:28.260    LINK app_repeat
00:03:28.260    CC test/event/scheduler/scheduler.o
00:03:28.260    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:28.520    LINK reset
00:03:28.520    CXX test/cpp_headers/bdev_module.o
00:03:28.520    CC test/env/memory/memory_ut.o
00:03:28.520    CXX test/cpp_headers/bdev_zone.o
00:03:28.520    LINK env_dpdk_post_init
00:03:28.520    CXX test/cpp_headers/bit_array.o
00:03:28.520    CC test/env/pci/pci_ut.o
00:03:28.520    LINK scheduler
00:03:28.520    CC test/nvme/sgl/sgl.o
00:03:28.778    CXX test/cpp_headers/bit_pool.o
00:03:28.778    CXX test/cpp_headers/blob_bdev.o
00:03:28.778    CXX test/cpp_headers/blobfs_bdev.o
00:03:28.778    CXX test/cpp_headers/blobfs.o
00:03:28.778    CXX test/cpp_headers/blob.o
00:03:28.778    LINK sgl
00:03:29.099    CXX test/cpp_headers/conf.o
00:03:29.099    CXX test/cpp_headers/config.o
00:03:29.099    CXX test/cpp_headers/cpuset.o
00:03:29.099    CC test/nvme/e2edp/nvme_dp.o
00:03:29.099    CXX test/cpp_headers/crc16.o
00:03:29.099    CXX test/cpp_headers/crc32.o
00:03:29.099    CXX test/cpp_headers/crc64.o
00:03:29.099    LINK pci_ut
00:03:29.383    CXX test/cpp_headers/dif.o
00:03:29.383    CXX test/cpp_headers/dma.o
00:03:29.383    CXX test/cpp_headers/endian.o
00:03:29.383    CXX test/cpp_headers/env_dpdk.o
00:03:29.383    CXX test/cpp_headers/env.o
00:03:29.383    LINK nvme_dp
00:03:29.383    CXX test/cpp_headers/event.o
00:03:29.383    CXX test/cpp_headers/fd_group.o
00:03:29.383    CXX test/cpp_headers/fd.o
00:03:29.383    CC test/nvme/overhead/overhead.o
00:03:29.383    CC test/nvme/err_injection/err_injection.o
00:03:29.384    CC test/nvme/startup/startup.o
00:03:29.642    CC test/nvme/reserve/reserve.o
00:03:29.642    LINK memory_ut
00:03:29.642    CXX test/cpp_headers/file.o
00:03:29.642    CC test/nvme/simple_copy/simple_copy.o
00:03:29.642    CC test/nvme/connect_stress/connect_stress.o
00:03:29.642    LINK startup
00:03:29.642    LINK err_injection
00:03:29.642    LINK overhead
00:03:29.642    LINK reserve
00:03:29.900    CXX test/cpp_headers/ftl.o
00:03:29.900    CXX test/cpp_headers/gpt_spec.o
00:03:29.900    LINK connect_stress
00:03:29.900    CC test/nvme/boot_partition/boot_partition.o
00:03:29.900    LINK simple_copy
00:03:29.900    CXX test/cpp_headers/hexlify.o
00:03:29.900    CC test/nvme/compliance/nvme_compliance.o
00:03:29.900    CXX test/cpp_headers/histogram_data.o
00:03:30.158    CC test/nvme/fused_ordering/fused_ordering.o
00:03:30.158    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:30.158    CXX test/cpp_headers/idxd.o
00:03:30.158    LINK boot_partition
00:03:30.158    CC test/nvme/fdp/fdp.o
00:03:30.417    CC test/nvme/cuse/cuse.o
00:03:30.417    CXX test/cpp_headers/idxd_spec.o
00:03:30.417    CXX test/cpp_headers/init.o
00:03:30.417    CXX test/cpp_headers/ioat.o
00:03:30.417    LINK nvme_compliance
00:03:30.417    LINK fused_ordering
00:03:30.417    LINK doorbell_aers
00:03:30.675    CXX test/cpp_headers/ioat_spec.o
00:03:30.675    LINK fdp
00:03:30.675    CXX test/cpp_headers/iscsi_spec.o
00:03:30.675    CXX test/cpp_headers/json.o
00:03:30.675    CXX test/cpp_headers/jsonrpc.o
00:03:30.675    CXX test/cpp_headers/likely.o
00:03:30.675    CXX test/cpp_headers/lvol.o
00:03:30.675    CXX test/cpp_headers/log.o
00:03:30.675    CXX test/cpp_headers/memory.o
00:03:30.933    CXX test/cpp_headers/mmio.o
00:03:30.933    CXX test/cpp_headers/nbd.o
00:03:30.933    CXX test/cpp_headers/notify.o
00:03:30.933    CXX test/cpp_headers/nvme.o
00:03:30.933    CXX test/cpp_headers/nvme_intel.o
00:03:30.933    CXX test/cpp_headers/nvme_ocssd.o
00:03:30.933    CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:30.933    CXX test/cpp_headers/nvme_spec.o
00:03:31.191    CXX test/cpp_headers/nvme_zns.o
00:03:31.191    CXX test/cpp_headers/nvmf_cmd.o
00:03:31.191    CXX test/cpp_headers/nvmf_fc_spec.o
00:03:31.191    CXX test/cpp_headers/nvmf.o
00:03:31.191    CXX test/cpp_headers/nvmf_spec.o
00:03:31.191    CXX test/cpp_headers/nvmf_transport.o
00:03:31.191    CXX test/cpp_headers/opal.o
00:03:31.191    CXX test/cpp_headers/opal_spec.o
00:03:31.191    CXX test/cpp_headers/pipe.o
00:03:31.191    CXX test/cpp_headers/pci_ids.o
00:03:31.449    CXX test/cpp_headers/queue.o
00:03:31.449    CXX test/cpp_headers/reduce.o
00:03:31.449    CXX test/cpp_headers/rpc.o
00:03:31.449    CXX test/cpp_headers/scheduler.o
00:03:31.449    CXX test/cpp_headers/scsi.o
00:03:31.449    CXX test/cpp_headers/scsi_spec.o
00:03:31.449    CXX test/cpp_headers/sock.o
00:03:31.449    CXX test/cpp_headers/stdinc.o
00:03:31.449    LINK cuse
00:03:31.708    CXX test/cpp_headers/string.o
00:03:31.708    CXX test/cpp_headers/thread.o
00:03:31.708    CXX test/cpp_headers/trace.o
00:03:31.708    CXX test/cpp_headers/trace_parser.o
00:03:31.708    CXX test/cpp_headers/tree.o
00:03:31.708    CXX test/cpp_headers/ublk.o
00:03:31.708    CXX test/cpp_headers/util.o
00:03:31.708    CXX test/cpp_headers/uuid.o
00:03:31.708    CXX test/cpp_headers/version.o
00:03:31.708    CXX test/cpp_headers/vfio_user_pci.o
00:03:31.966    CXX test/cpp_headers/vfio_user_spec.o
00:03:31.966    CXX test/cpp_headers/vhost.o
00:03:31.966    CXX test/cpp_headers/vmd.o
00:03:31.966    CXX test/cpp_headers/xor.o
00:03:31.966    CXX test/cpp_headers/zipf.o
00:03:33.343    LINK esnap
00:03:33.343  
00:03:33.343  real	1m0.159s
00:03:33.343  user	6m25.618s
00:03:33.343  sys	1m32.449s
00:03:33.343   06:13:50	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:03:33.343   06:13:50	-- common/autotest_common.sh@10 -- $ set +x
00:03:33.343  ************************************
00:03:33.343  END TEST make
00:03:33.343  ************************************
00:03:33.601    06:13:50	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:33.601     06:13:50	-- common/autotest_common.sh@1690 -- # lcov --version
00:03:33.601     06:13:50	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:33.601    06:13:50	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:33.601    06:13:50	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:33.601    06:13:50	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:33.601    06:13:50	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:33.601    06:13:50	-- scripts/common.sh@335 -- # IFS=.-:
00:03:33.601    06:13:50	-- scripts/common.sh@335 -- # read -ra ver1
00:03:33.601    06:13:50	-- scripts/common.sh@336 -- # IFS=.-:
00:03:33.601    06:13:50	-- scripts/common.sh@336 -- # read -ra ver2
00:03:33.601    06:13:50	-- scripts/common.sh@337 -- # local 'op=<'
00:03:33.601    06:13:50	-- scripts/common.sh@339 -- # ver1_l=2
00:03:33.601    06:13:50	-- scripts/common.sh@340 -- # ver2_l=1
00:03:33.601    06:13:50	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:33.601    06:13:50	-- scripts/common.sh@343 -- # case "$op" in
00:03:33.601    06:13:50	-- scripts/common.sh@344 -- # : 1
00:03:33.601    06:13:50	-- scripts/common.sh@363 -- # (( v = 0 ))
00:03:33.601    06:13:50	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:33.601     06:13:50	-- scripts/common.sh@364 -- # decimal 1
00:03:33.601     06:13:50	-- scripts/common.sh@352 -- # local d=1
00:03:33.601     06:13:50	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:33.601     06:13:50	-- scripts/common.sh@354 -- # echo 1
00:03:33.601    06:13:50	-- scripts/common.sh@364 -- # ver1[v]=1
00:03:33.601     06:13:50	-- scripts/common.sh@365 -- # decimal 2
00:03:33.601     06:13:50	-- scripts/common.sh@352 -- # local d=2
00:03:33.601     06:13:50	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:33.601     06:13:50	-- scripts/common.sh@354 -- # echo 2
00:03:33.601    06:13:50	-- scripts/common.sh@365 -- # ver2[v]=2
00:03:33.601    06:13:50	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:33.601    06:13:50	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:33.601    06:13:50	-- scripts/common.sh@367 -- # return 0
00:03:33.601    06:13:50	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:33.601    06:13:50	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:33.601  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:33.601  		--rc genhtml_branch_coverage=1
00:03:33.601  		--rc genhtml_function_coverage=1
00:03:33.601  		--rc genhtml_legend=1
00:03:33.601  		--rc geninfo_all_blocks=1
00:03:33.601  		--rc geninfo_unexecuted_blocks=1
00:03:33.601  		
00:03:33.601  		'
00:03:33.601    06:13:50	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:33.601  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:33.601  		--rc genhtml_branch_coverage=1
00:03:33.601  		--rc genhtml_function_coverage=1
00:03:33.601  		--rc genhtml_legend=1
00:03:33.601  		--rc geninfo_all_blocks=1
00:03:33.601  		--rc geninfo_unexecuted_blocks=1
00:03:33.601  		
00:03:33.601  		'
00:03:33.601    06:13:50	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:03:33.601  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:33.601  		--rc genhtml_branch_coverage=1
00:03:33.601  		--rc genhtml_function_coverage=1
00:03:33.601  		--rc genhtml_legend=1
00:03:33.601  		--rc geninfo_all_blocks=1
00:03:33.601  		--rc geninfo_unexecuted_blocks=1
00:03:33.601  		
00:03:33.601  		'
00:03:33.601    06:13:50	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:03:33.601  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:33.601  		--rc genhtml_branch_coverage=1
00:03:33.601  		--rc genhtml_function_coverage=1
00:03:33.601  		--rc genhtml_legend=1
00:03:33.601  		--rc geninfo_all_blocks=1
00:03:33.601  		--rc geninfo_unexecuted_blocks=1
00:03:33.601  		
00:03:33.601  		'
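The trace above walks `lt 1.15 2` through `cmp_versions`: each version string is split on `.`, `-` and `:` into an array, then compared field by field to decide whether the installed lcov is new enough for the branch/function coverage flags. A minimal standalone sketch of that field-wise comparison (illustrative only, not the exact code in scripts/common.sh):

    #!/usr/bin/env bash
    # Sketch of a field-wise "version A < version B" check, mirroring the
    # cmp_versions trace above. Assumes purely numeric version fields.
    version_lt() {
        local IFS=.-:                 # split on dots, dashes and colons
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1   # A is newer
            (( x < y )) && return 0   # A is older
        done
        return 1                      # equal, so not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2.x"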
00:03:33.601   06:13:50	-- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:03:33.601     06:13:50	-- nvmf/common.sh@7 -- # uname -s
00:03:33.601    06:13:50	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:33.601    06:13:50	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:33.601    06:13:50	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:33.601    06:13:50	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:33.601    06:13:50	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:33.601    06:13:50	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:33.601    06:13:50	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:33.601    06:13:50	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:33.601    06:13:50	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:33.601     06:13:50	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:33.601    06:13:50	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:03:33.601    06:13:50	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:03:33.601    06:13:50	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:33.601    06:13:50	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:33.601    06:13:50	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:03:33.601    06:13:50	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:33.601     06:13:50	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:33.601     06:13:50	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:33.601     06:13:50	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:33.601      06:13:50	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:33.601      06:13:50	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:33.601      06:13:50	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:33.601      06:13:50	-- paths/export.sh@5 -- # export PATH
00:03:33.601      06:13:50	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:33.601    06:13:50	-- nvmf/common.sh@46 -- # : 0
00:03:33.601    06:13:50	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:03:33.601    06:13:50	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:03:33.601    06:13:50	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:03:33.601    06:13:50	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:33.601    06:13:50	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:33.601    06:13:50	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:03:33.601    06:13:50	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:03:33.601    06:13:50	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:03:33.601   06:13:50	-- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:33.601    06:13:50	-- spdk/autotest.sh@32 -- # uname -s
00:03:33.601   06:13:50	-- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:33.601   06:13:50	-- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:33.601   06:13:50	-- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:03:33.601   06:13:50	-- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:03:33.601   06:13:50	-- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:03:33.601   06:13:50	-- spdk/autotest.sh@44 -- # modprobe nbd
00:03:33.601    06:13:50	-- spdk/autotest.sh@46 -- # type -P udevadm
00:03:33.601   06:13:50	-- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:33.601   06:13:50	-- spdk/autotest.sh@48 -- # udevadm_pid=49739
00:03:33.601   06:13:50	-- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power
00:03:33.601   06:13:50	-- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:33.601   06:13:50	-- spdk/autotest.sh@54 -- # echo 49748
00:03:33.601   06:13:50	-- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power
00:03:33.859   06:13:50	-- spdk/autotest.sh@56 -- # echo 49751
00:03:33.859   06:13:50	-- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power
00:03:33.859   06:13:50	-- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]]
00:03:33.859   06:13:50	-- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:33.859   06:13:50	-- spdk/autotest.sh@68 -- # timing_enter autotest
00:03:33.859   06:13:50	-- common/autotest_common.sh@722 -- # xtrace_disable
00:03:33.859   06:13:50	-- common/autotest_common.sh@10 -- # set +x
00:03:33.859   06:13:50	-- spdk/autotest.sh@70 -- # create_test_list
00:03:33.859   06:13:50	-- common/autotest_common.sh@746 -- # xtrace_disable
00:03:33.859   06:13:50	-- common/autotest_common.sh@10 -- # set +x
00:03:33.859     06:13:50	-- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:03:33.859    06:13:50	-- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:03:33.859   06:13:50	-- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk
00:03:33.859   06:13:50	-- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:03:33.859   06:13:50	-- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk
00:03:33.859   06:13:50	-- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod
00:03:33.859    06:13:50	-- common/autotest_common.sh@1450 -- # uname
00:03:33.859   06:13:50	-- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']'
00:03:33.859   06:13:50	-- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf
00:03:33.859    06:13:50	-- common/autotest_common.sh@1470 -- # uname
00:03:33.859   06:13:50	-- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]]
00:03:33.859   06:13:50	-- spdk/autotest.sh@79 -- # [[ y == y ]]
00:03:33.859   06:13:50	-- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:33.859  lcov: LCOV version 1.15
00:03:33.859   06:13:50	-- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:03:41.971  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:03:41.971  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:03:41.971  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:03:41.971  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:03:41.971  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:03:41.971  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:04:00.055   06:14:15	-- spdk/autotest.sh@87 -- # timing_enter pre_cleanup
00:04:00.055   06:14:15	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:00.055   06:14:15	-- common/autotest_common.sh@10 -- # set +x
00:04:00.055   06:14:15	-- spdk/autotest.sh@89 -- # rm -f
00:04:00.055   06:14:15	-- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:00.055  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:00.055  0000:00:06.0 (1b36 0010): Already using the nvme driver
00:04:00.055  0000:00:07.0 (1b36 0010): Already using the nvme driver
00:04:00.055   06:14:16	-- spdk/autotest.sh@94 -- # get_zoned_devs
00:04:00.055   06:14:16	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:00.055   06:14:16	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:00.055   06:14:16	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:00.055   06:14:16	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:00.055   06:14:16	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:00.055   06:14:16	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:00.055   06:14:16	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:00.055   06:14:16	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:00.055   06:14:16	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:00.055   06:14:16	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n2
00:04:00.055   06:14:16	-- common/autotest_common.sh@1657 -- # local device=nvme0n2
00:04:00.055   06:14:16	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:04:00.055   06:14:16	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:00.055   06:14:16	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:00.055   06:14:16	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n3
00:04:00.055   06:14:16	-- common/autotest_common.sh@1657 -- # local device=nvme0n3
00:04:00.055   06:14:16	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:04:00.055   06:14:16	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:00.055   06:14:16	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:00.055   06:14:16	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1
00:04:00.055   06:14:16	-- common/autotest_common.sh@1657 -- # local device=nvme1n1
00:04:00.055   06:14:16	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:04:00.055   06:14:16	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:00.055   06:14:16	-- spdk/autotest.sh@96 -- # (( 0 > 0 ))
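The loop above builds the `zoned_devs` map by reading `/sys/block/<dev>/queue/zoned` for each NVMe namespace and keeping only devices that report something other than `none`; here all four namespaces are conventional, so the map stays empty and the `(( 0 > 0 ))` branch is skipped. A small sketch of the same sysfs check (assumes the `queue/zoned` attribute is present, as it is on this kernel):

    #!/usr/bin/env bash
    # Sketch: collect zoned block devices the way the trace above does,
    # by reading /sys/block/<name>/queue/zoned ("none" = conventional).
    declare -A zoned_devs=()
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        model=$(<"$dev/queue/zoned")
        if [[ $model != none ]]; then
            zoned_devs[${dev##*/}]=$model   # e.g. "host-managed"
        fi
    done
    echo "zoned namespaces found: ${#zoned_devs[@]}"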
00:04:00.055    06:14:16	-- spdk/autotest.sh@108 -- # grep -v p
00:04:00.055    06:14:16	-- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1
00:04:00.055   06:14:16	-- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:00.055   06:14:16	-- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:04:00.055   06:14:16	-- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1
00:04:00.055   06:14:16	-- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:04:00.055   06:14:16	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:00.055  No valid GPT data, bailing
00:04:00.055    06:14:16	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:00.055   06:14:16	-- scripts/common.sh@393 -- # pt=
00:04:00.055   06:14:16	-- scripts/common.sh@394 -- # return 1
00:04:00.055   06:14:16	-- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:00.055  1+0 records in
00:04:00.055  1+0 records out
00:04:00.055  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00366256 s, 286 MB/s
00:04:00.055   06:14:16	-- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:00.055   06:14:16	-- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:04:00.055   06:14:16	-- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n2
00:04:00.055   06:14:16	-- scripts/common.sh@380 -- # local block=/dev/nvme0n2 pt
00:04:00.055   06:14:16	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2
00:04:00.055  No valid GPT data, bailing
00:04:00.055    06:14:16	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:04:00.055   06:14:16	-- scripts/common.sh@393 -- # pt=
00:04:00.055   06:14:16	-- scripts/common.sh@394 -- # return 1
00:04:00.055   06:14:16	-- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1
00:04:00.055  1+0 records in
00:04:00.055  1+0 records out
00:04:00.055  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00334814 s, 313 MB/s
00:04:00.055   06:14:16	-- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:00.055   06:14:16	-- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:04:00.055   06:14:16	-- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n3
00:04:00.055   06:14:16	-- scripts/common.sh@380 -- # local block=/dev/nvme0n3 pt
00:04:00.055   06:14:16	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3
00:04:00.055  No valid GPT data, bailing
00:04:00.055    06:14:16	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n3
00:04:00.055   06:14:16	-- scripts/common.sh@393 -- # pt=
00:04:00.055   06:14:16	-- scripts/common.sh@394 -- # return 1
00:04:00.055   06:14:16	-- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1
00:04:00.055  1+0 records in
00:04:00.055  1+0 records out
00:04:00.055  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410309 s, 256 MB/s
00:04:00.055   06:14:16	-- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:00.055   06:14:16	-- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:04:00.055   06:14:16	-- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1
00:04:00.055   06:14:16	-- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt
00:04:00.055   06:14:16	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:04:00.055  No valid GPT data, bailing
00:04:00.055    06:14:16	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:00.055   06:14:16	-- scripts/common.sh@393 -- # pt=
00:04:00.055   06:14:16	-- scripts/common.sh@394 -- # return 1
00:04:00.055   06:14:16	-- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:04:00.055  1+0 records in
00:04:00.055  1+0 records out
00:04:00.055  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048259 s, 217 MB/s
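For each namespace the run first asks spdk-gpt.py and blkid whether a partition table exists ("No valid GPT data, bailing", empty PTTYPE), and only because the device is unclaimed does it zero the first MiB with dd so later tests start from a clean block device. A hedged sketch of that guard-then-wipe pattern; `wipe_if_unused` is an illustrative helper name, not part of the SPDK scripts:

    #!/usr/bin/env bash
    # Sketch of the "only wipe when no partition table is found" pattern
    # seen above. Destructive: run only against disposable test devices.
    wipe_if_unused() {
        local dev=$1 pt
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -n $pt ]]; then
            echo "skipping $dev: partition table '$pt' present" >&2
            return 1
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1 conv=fsync
    }

    wipe_if_unused /dev/nvme0n1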
00:04:00.055   06:14:16	-- spdk/autotest.sh@116 -- # sync
00:04:00.314   06:14:17	-- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:00.314   06:14:17	-- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:00.314    06:14:17	-- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:02.216    06:14:19	-- spdk/autotest.sh@122 -- # uname -s
00:04:02.216   06:14:19	-- spdk/autotest.sh@122 -- # '[' Linux = Linux ']'
00:04:02.216   06:14:19	-- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:02.216   06:14:19	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:02.216   06:14:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:02.216   06:14:19	-- common/autotest_common.sh@10 -- # set +x
00:04:02.216  ************************************
00:04:02.217  START TEST setup.sh
00:04:02.217  ************************************
00:04:02.217   06:14:19	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:02.217  * Looking for test storage...
00:04:02.217  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:02.217     06:14:19	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:02.217      06:14:19	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:02.217      06:14:19	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:02.475     06:14:19	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:02.475     06:14:19	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:02.475     06:14:19	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:02.475     06:14:19	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:02.475     06:14:19	-- scripts/common.sh@335 -- # IFS=.-:
00:04:02.475     06:14:19	-- scripts/common.sh@335 -- # read -ra ver1
00:04:02.475     06:14:19	-- scripts/common.sh@336 -- # IFS=.-:
00:04:02.475     06:14:19	-- scripts/common.sh@336 -- # read -ra ver2
00:04:02.475     06:14:19	-- scripts/common.sh@337 -- # local 'op=<'
00:04:02.475     06:14:19	-- scripts/common.sh@339 -- # ver1_l=2
00:04:02.475     06:14:19	-- scripts/common.sh@340 -- # ver2_l=1
00:04:02.475     06:14:19	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:02.475     06:14:19	-- scripts/common.sh@343 -- # case "$op" in
00:04:02.475     06:14:19	-- scripts/common.sh@344 -- # : 1
00:04:02.475     06:14:19	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:02.476     06:14:19	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:02.476      06:14:19	-- scripts/common.sh@364 -- # decimal 1
00:04:02.476      06:14:19	-- scripts/common.sh@352 -- # local d=1
00:04:02.476      06:14:19	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:02.476      06:14:19	-- scripts/common.sh@354 -- # echo 1
00:04:02.476     06:14:19	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:02.476      06:14:19	-- scripts/common.sh@365 -- # decimal 2
00:04:02.476      06:14:19	-- scripts/common.sh@352 -- # local d=2
00:04:02.476      06:14:19	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:02.476      06:14:19	-- scripts/common.sh@354 -- # echo 2
00:04:02.476     06:14:19	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:02.476     06:14:19	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:02.476     06:14:19	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:02.476     06:14:19	-- scripts/common.sh@367 -- # return 0
00:04:02.476     06:14:19	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:02.476     06:14:19	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:02.476  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.476  		--rc genhtml_branch_coverage=1
00:04:02.476  		--rc genhtml_function_coverage=1
00:04:02.476  		--rc genhtml_legend=1
00:04:02.476  		--rc geninfo_all_blocks=1
00:04:02.476  		--rc geninfo_unexecuted_blocks=1
00:04:02.476  		
00:04:02.476  		'
00:04:02.476     06:14:19	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:02.476  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.476  		--rc genhtml_branch_coverage=1
00:04:02.476  		--rc genhtml_function_coverage=1
00:04:02.476  		--rc genhtml_legend=1
00:04:02.476  		--rc geninfo_all_blocks=1
00:04:02.476  		--rc geninfo_unexecuted_blocks=1
00:04:02.476  		
00:04:02.476  		'
00:04:02.476     06:14:19	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:02.476  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.476  		--rc genhtml_branch_coverage=1
00:04:02.476  		--rc genhtml_function_coverage=1
00:04:02.476  		--rc genhtml_legend=1
00:04:02.476  		--rc geninfo_all_blocks=1
00:04:02.476  		--rc geninfo_unexecuted_blocks=1
00:04:02.476  		
00:04:02.476  		'
00:04:02.476     06:14:19	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:02.476  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.476  		--rc genhtml_branch_coverage=1
00:04:02.476  		--rc genhtml_function_coverage=1
00:04:02.476  		--rc genhtml_legend=1
00:04:02.476  		--rc geninfo_all_blocks=1
00:04:02.476  		--rc geninfo_unexecuted_blocks=1
00:04:02.476  		
00:04:02.476  		'
00:04:02.476    06:14:19	-- setup/test-setup.sh@10 -- # uname -s
00:04:02.476   06:14:19	-- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:02.476   06:14:19	-- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:02.476   06:14:19	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:02.476   06:14:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:02.476   06:14:19	-- common/autotest_common.sh@10 -- # set +x
00:04:02.476  ************************************
00:04:02.476  START TEST acl
00:04:02.476  ************************************
00:04:02.476   06:14:19	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:02.476  * Looking for test storage...
00:04:02.476  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:02.476     06:14:19	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:02.476      06:14:19	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:02.476      06:14:19	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:02.476     06:14:19	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:02.476     06:14:19	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:02.476     06:14:19	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:02.476     06:14:19	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:02.476     06:14:19	-- scripts/common.sh@335 -- # IFS=.-:
00:04:02.476     06:14:19	-- scripts/common.sh@335 -- # read -ra ver1
00:04:02.476     06:14:19	-- scripts/common.sh@336 -- # IFS=.-:
00:04:02.476     06:14:19	-- scripts/common.sh@336 -- # read -ra ver2
00:04:02.735     06:14:19	-- scripts/common.sh@337 -- # local 'op=<'
00:04:02.735     06:14:19	-- scripts/common.sh@339 -- # ver1_l=2
00:04:02.735     06:14:19	-- scripts/common.sh@340 -- # ver2_l=1
00:04:02.735     06:14:19	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:02.735     06:14:19	-- scripts/common.sh@343 -- # case "$op" in
00:04:02.735     06:14:19	-- scripts/common.sh@344 -- # : 1
00:04:02.735     06:14:19	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:02.735     06:14:19	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:02.735      06:14:19	-- scripts/common.sh@364 -- # decimal 1
00:04:02.735      06:14:19	-- scripts/common.sh@352 -- # local d=1
00:04:02.735      06:14:19	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:02.735      06:14:19	-- scripts/common.sh@354 -- # echo 1
00:04:02.735     06:14:19	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:02.735      06:14:19	-- scripts/common.sh@365 -- # decimal 2
00:04:02.735      06:14:19	-- scripts/common.sh@352 -- # local d=2
00:04:02.735      06:14:19	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:02.735      06:14:19	-- scripts/common.sh@354 -- # echo 2
00:04:02.735     06:14:19	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:02.735     06:14:19	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:02.735     06:14:19	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:02.735     06:14:19	-- scripts/common.sh@367 -- # return 0
00:04:02.735     06:14:19	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:02.735     06:14:19	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:02.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.735  		--rc genhtml_branch_coverage=1
00:04:02.735  		--rc genhtml_function_coverage=1
00:04:02.735  		--rc genhtml_legend=1
00:04:02.735  		--rc geninfo_all_blocks=1
00:04:02.735  		--rc geninfo_unexecuted_blocks=1
00:04:02.735  		
00:04:02.735  		'
00:04:02.735     06:14:19	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:02.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.735  		--rc genhtml_branch_coverage=1
00:04:02.735  		--rc genhtml_function_coverage=1
00:04:02.735  		--rc genhtml_legend=1
00:04:02.735  		--rc geninfo_all_blocks=1
00:04:02.735  		--rc geninfo_unexecuted_blocks=1
00:04:02.735  		
00:04:02.735  		'
00:04:02.735     06:14:19	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:02.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.735  		--rc genhtml_branch_coverage=1
00:04:02.735  		--rc genhtml_function_coverage=1
00:04:02.735  		--rc genhtml_legend=1
00:04:02.735  		--rc geninfo_all_blocks=1
00:04:02.735  		--rc geninfo_unexecuted_blocks=1
00:04:02.735  		
00:04:02.735  		'
00:04:02.735     06:14:19	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:02.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.735  		--rc genhtml_branch_coverage=1
00:04:02.735  		--rc genhtml_function_coverage=1
00:04:02.735  		--rc genhtml_legend=1
00:04:02.735  		--rc geninfo_all_blocks=1
00:04:02.735  		--rc geninfo_unexecuted_blocks=1
00:04:02.735  		
00:04:02.735  		'
00:04:02.735   06:14:19	-- setup/acl.sh@10 -- # get_zoned_devs
00:04:02.735   06:14:19	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:02.735   06:14:19	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:02.735   06:14:19	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:02.735   06:14:19	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:02.735   06:14:19	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:02.735   06:14:19	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:02.736   06:14:19	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:02.736   06:14:19	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:02.736   06:14:19	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:02.736   06:14:19	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n2
00:04:02.736   06:14:19	-- common/autotest_common.sh@1657 -- # local device=nvme0n2
00:04:02.736   06:14:19	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:04:02.736   06:14:19	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:02.736   06:14:19	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:02.736   06:14:19	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n3
00:04:02.736   06:14:19	-- common/autotest_common.sh@1657 -- # local device=nvme0n3
00:04:02.736   06:14:19	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:04:02.736   06:14:19	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:02.736   06:14:19	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:02.736   06:14:19	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1
00:04:02.736   06:14:19	-- common/autotest_common.sh@1657 -- # local device=nvme1n1
00:04:02.736   06:14:19	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:04:02.736   06:14:19	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:02.736   06:14:19	-- setup/acl.sh@12 -- # devs=()
00:04:02.736   06:14:19	-- setup/acl.sh@12 -- # declare -a devs
00:04:02.736   06:14:19	-- setup/acl.sh@13 -- # drivers=()
00:04:02.736   06:14:19	-- setup/acl.sh@13 -- # declare -A drivers
00:04:02.736   06:14:19	-- setup/acl.sh@51 -- # setup reset
00:04:02.736   06:14:19	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:02.736   06:14:19	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:03.303   06:14:20	-- setup/acl.sh@52 -- # collect_setup_devs
00:04:03.303   06:14:20	-- setup/acl.sh@16 -- # local dev driver
00:04:03.303   06:14:20	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:03.303    06:14:20	-- setup/acl.sh@15 -- # setup output status
00:04:03.303    06:14:20	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.303    06:14:20	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:03.562  Hugepages
00:04:03.562  node     hugesize     free /  total
00:04:03.562   06:14:20	-- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:03.562   06:14:20	-- setup/acl.sh@19 -- # continue
00:04:03.562   06:14:20	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:03.562  
00:04:03.562  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:03.562   06:14:20	-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:03.562   06:14:20	-- setup/acl.sh@19 -- # continue
00:04:03.562   06:14:20	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:03.562   06:14:20	-- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:04:03.562   06:14:20	-- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:04:03.562   06:14:20	-- setup/acl.sh@20 -- # continue
00:04:03.562   06:14:20	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:03.562   06:14:20	-- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]]
00:04:03.562   06:14:20	-- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:03.562   06:14:20	-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:04:03.562   06:14:20	-- setup/acl.sh@22 -- # devs+=("$dev")
00:04:03.562   06:14:20	-- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:03.562   06:14:20	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:03.562   06:14:20	-- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]]
00:04:03.562   06:14:20	-- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:03.562   06:14:20	-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]]
00:04:03.820   06:14:20	-- setup/acl.sh@22 -- # devs+=("$dev")
00:04:03.820   06:14:20	-- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:03.820   06:14:20	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:03.820   06:14:20	-- setup/acl.sh@24 -- # (( 2 > 0 ))
00:04:03.820   06:14:20	-- setup/acl.sh@54 -- # run_test denied denied
00:04:03.820   06:14:20	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:03.820   06:14:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:03.820   06:14:20	-- common/autotest_common.sh@10 -- # set +x
00:04:03.820  ************************************
00:04:03.820  START TEST denied
00:04:03.820  ************************************
00:04:03.820   06:14:20	-- common/autotest_common.sh@1114 -- # denied
00:04:03.820   06:14:20	-- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:04:03.820   06:14:20	-- setup/acl.sh@38 -- # setup output config
00:04:03.820   06:14:20	-- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:04:03.820   06:14:20	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.820   06:14:20	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:04.756  0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:04:04.756   06:14:21	-- setup/acl.sh@40 -- # verify 0000:00:06.0
00:04:04.756   06:14:21	-- setup/acl.sh@28 -- # local dev driver
00:04:04.756   06:14:21	-- setup/acl.sh@30 -- # for dev in "$@"
00:04:04.756   06:14:21	-- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:04:04.756    06:14:21	-- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:04:04.756   06:14:21	-- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:04.756   06:14:21	-- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:04.756   06:14:21	-- setup/acl.sh@41 -- # setup reset
00:04:04.756   06:14:21	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:04.756   06:14:21	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:05.015  
00:04:05.015  real	0m1.430s
00:04:05.015  user	0m0.578s
00:04:05.015  sys	0m0.803s
00:04:05.015   06:14:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:05.015   06:14:21	-- common/autotest_common.sh@10 -- # set +x
00:04:05.015  ************************************
00:04:05.015  END TEST denied
00:04:05.015  ************************************
00:04:05.273   06:14:22	-- setup/acl.sh@55 -- # run_test allowed allowed
00:04:05.273   06:14:22	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:05.273   06:14:22	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:05.273   06:14:22	-- common/autotest_common.sh@10 -- # set +x
00:04:05.273  ************************************
00:04:05.273  START TEST allowed
00:04:05.273  ************************************
00:04:05.273   06:14:22	-- common/autotest_common.sh@1114 -- # allowed
00:04:05.273   06:14:22	-- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:04:05.273   06:14:22	-- setup/acl.sh@45 -- # setup output config
00:04:05.273   06:14:22	-- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:04:05.273   06:14:22	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.273   06:14:22	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:05.841  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:05.841   06:14:22	-- setup/acl.sh@47 -- # verify 0000:00:07.0
00:04:05.841   06:14:22	-- setup/acl.sh@28 -- # local dev driver
00:04:05.841   06:14:22	-- setup/acl.sh@30 -- # for dev in "$@"
00:04:05.841   06:14:22	-- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]]
00:04:05.841    06:14:22	-- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver
00:04:05.841   06:14:22	-- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:05.841   06:14:22	-- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:05.841   06:14:22	-- setup/acl.sh@48 -- # setup reset
00:04:05.841   06:14:22	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:05.841   06:14:22	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:06.777  ************************************
00:04:06.777  END TEST allowed
00:04:06.777  ************************************
00:04:06.777  
00:04:06.777  real	0m1.496s
00:04:06.777  user	0m0.699s
00:04:06.777  sys	0m0.800s
00:04:06.777   06:14:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:06.777   06:14:23	-- common/autotest_common.sh@10 -- # set +x
00:04:06.777  
00:04:06.777  real	0m4.273s
00:04:06.777  user	0m1.937s
00:04:06.777  sys	0m2.317s
00:04:06.777   06:14:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:06.777   06:14:23	-- common/autotest_common.sh@10 -- # set +x
00:04:06.777  ************************************
00:04:06.777  END TEST acl
00:04:06.777  ************************************
00:04:06.777   06:14:23	-- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:06.777   06:14:23	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:06.777   06:14:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:06.777   06:14:23	-- common/autotest_common.sh@10 -- # set +x
00:04:06.777  ************************************
00:04:06.777  START TEST hugepages
00:04:06.777  ************************************
00:04:06.777   06:14:23	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:06.777  * Looking for test storage...
00:04:06.777  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:06.777     06:14:23	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:06.777      06:14:23	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:06.777      06:14:23	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:07.037     06:14:23	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:07.037     06:14:23	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:07.037     06:14:23	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:07.037     06:14:23	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:07.037     06:14:23	-- scripts/common.sh@335 -- # IFS=.-:
00:04:07.037     06:14:23	-- scripts/common.sh@335 -- # read -ra ver1
00:04:07.037     06:14:23	-- scripts/common.sh@336 -- # IFS=.-:
00:04:07.037     06:14:23	-- scripts/common.sh@336 -- # read -ra ver2
00:04:07.037     06:14:23	-- scripts/common.sh@337 -- # local 'op=<'
00:04:07.037     06:14:23	-- scripts/common.sh@339 -- # ver1_l=2
00:04:07.037     06:14:23	-- scripts/common.sh@340 -- # ver2_l=1
00:04:07.037     06:14:23	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:07.037     06:14:23	-- scripts/common.sh@343 -- # case "$op" in
00:04:07.037     06:14:23	-- scripts/common.sh@344 -- # : 1
00:04:07.037     06:14:23	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:07.037     06:14:23	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:07.037      06:14:23	-- scripts/common.sh@364 -- # decimal 1
00:04:07.037      06:14:23	-- scripts/common.sh@352 -- # local d=1
00:04:07.037      06:14:23	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:07.037      06:14:23	-- scripts/common.sh@354 -- # echo 1
00:04:07.037     06:14:23	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:07.037      06:14:23	-- scripts/common.sh@365 -- # decimal 2
00:04:07.037      06:14:23	-- scripts/common.sh@352 -- # local d=2
00:04:07.037      06:14:23	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:07.037      06:14:23	-- scripts/common.sh@354 -- # echo 2
00:04:07.037     06:14:23	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:07.037     06:14:23	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:07.037     06:14:23	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:07.037     06:14:23	-- scripts/common.sh@367 -- # return 0
00:04:07.037     06:14:23	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:07.037     06:14:23	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:07.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.037  		--rc genhtml_branch_coverage=1
00:04:07.037  		--rc genhtml_function_coverage=1
00:04:07.037  		--rc genhtml_legend=1
00:04:07.037  		--rc geninfo_all_blocks=1
00:04:07.037  		--rc geninfo_unexecuted_blocks=1
00:04:07.037  		
00:04:07.037  		'
00:04:07.037     06:14:23	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:07.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.037  		--rc genhtml_branch_coverage=1
00:04:07.037  		--rc genhtml_function_coverage=1
00:04:07.037  		--rc genhtml_legend=1
00:04:07.037  		--rc geninfo_all_blocks=1
00:04:07.037  		--rc geninfo_unexecuted_blocks=1
00:04:07.037  		
00:04:07.037  		'
00:04:07.037     06:14:23	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:07.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.037  		--rc genhtml_branch_coverage=1
00:04:07.037  		--rc genhtml_function_coverage=1
00:04:07.037  		--rc genhtml_legend=1
00:04:07.037  		--rc geninfo_all_blocks=1
00:04:07.037  		--rc geninfo_unexecuted_blocks=1
00:04:07.037  		
00:04:07.037  		'
00:04:07.037     06:14:23	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:07.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.037  		--rc genhtml_branch_coverage=1
00:04:07.037  		--rc genhtml_function_coverage=1
00:04:07.037  		--rc genhtml_legend=1
00:04:07.037  		--rc geninfo_all_blocks=1
00:04:07.037  		--rc geninfo_unexecuted_blocks=1
00:04:07.037  		
00:04:07.037  		'
00:04:07.037   06:14:23	-- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:07.037   06:14:23	-- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:07.037   06:14:23	-- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:07.037   06:14:23	-- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:07.037   06:14:23	-- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:07.037    06:14:23	-- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:07.037    06:14:23	-- setup/common.sh@17 -- # local get=Hugepagesize
00:04:07.037    06:14:23	-- setup/common.sh@18 -- # local node=
00:04:07.037    06:14:23	-- setup/common.sh@19 -- # local var val
00:04:07.037    06:14:23	-- setup/common.sh@20 -- # local mem_f mem
00:04:07.037    06:14:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.037    06:14:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.037    06:14:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.037    06:14:23	-- setup/common.sh@28 -- # mapfile -t mem
00:04:07.037    06:14:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.037     06:14:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         5850528 kB' 'MemAvailable:    7362392 kB' 'Buffers:            2684 kB' 'Cached:          1722744 kB' 'SwapCached:            0 kB' 'Active:           496408 kB' 'Inactive:        1345748 kB' 'Active(anon):     127236 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345748 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               320 kB' 'Writeback:             0 kB' 'AnonPages:        118348 kB' 'Mapped:            50760 kB' 'Shmem:             10508 kB' 'KReclaimable:      68188 kB' 'Slab:             163180 kB' 'SReclaimable:      68188 kB' 'SUnreclaim:        94992 kB' 'KernelStack:        6480 kB' 'PageTables:         4408 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    12411008 kB' 'Committed_AS:     311532 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55192 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    2048' 'HugePages_Free:     2048' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         4194304 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:07.037    06:14:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.037    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.037    06:14:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.037    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.037    06:14:23	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.037    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.037    06:14:23	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.037    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.037    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.038    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.038    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # continue
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # IFS=': '
00:04:07.039    06:14:23	-- setup/common.sh@31 -- # read -r var val _
00:04:07.039    06:14:23	-- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.039    06:14:23	-- setup/common.sh@33 -- # echo 2048
00:04:07.039    06:14:23	-- setup/common.sh@33 -- # return 0
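The long run of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" pairs above is the xtrace of setup/common.sh get_meminfo scanning /proc/meminfo one field at a time until it reaches the requested key; the closing "echo 2048" / "return 0" is the hit on Hugepagesize. A minimal sketch of that loop, reconstructed from the trace (the real helper also strips "Node N " prefixes so the same code can parse per-node meminfo files; names here only mirror the xtrace):

    # Sketch only: what the traced parse loop amounts to.
    get_meminfo_sketch() {
        local get=$1 node=$2                     # e.g. get=Hugepagesize, node empty => system-wide
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue     # skip every other meminfo field
            echo "$val"                          # value in kB, or a bare page count
            return 0
        done < "$mem_f"
        return 1
    }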
00:04:07.039   06:14:23	-- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:07.039   06:14:23	-- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:07.039   06:14:23	-- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:07.039   06:14:23	-- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:07.039   06:14:23	-- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:07.039   06:14:23	-- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:07.039   06:14:23	-- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:07.039   06:14:23	-- setup/hugepages.sh@207 -- # get_nodes
00:04:07.039   06:14:23	-- setup/hugepages.sh@27 -- # local node
00:04:07.039   06:14:23	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.039   06:14:23	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:07.039   06:14:23	-- setup/hugepages.sh@32 -- # no_nodes=1
00:04:07.039   06:14:23	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
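get_nodes above records one entry per NUMA node, keyed by node number and holding that node's current 2 MB hugepage count (2048 here, matching HugePages_Total in the meminfo dump); this VM has a single node, so no_nodes ends up 1. Roughly, as a sketch based on the traced lines:

    shopt -s extglob                     # needed for the node+([0-9]) glob seen in the trace
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}            # 1 on this host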
00:04:07.039   06:14:23	-- setup/hugepages.sh@208 -- # clear_hp
00:04:07.039   06:14:23	-- setup/hugepages.sh@37 -- # local node hp
00:04:07.039   06:14:23	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:07.039   06:14:23	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.039   06:14:23	-- setup/hugepages.sh@41 -- # echo 0
00:04:07.039   06:14:23	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.039   06:14:23	-- setup/hugepages.sh@41 -- # echo 0
00:04:07.039   06:14:23	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:07.039   06:14:23	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
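clear_hp then zeroes every per-node hugepage pool before the test starts (the two "echo 0" lines most likely correspond to the 2 MB and 1 GB pools on node 0) and sets CLEAR_HUGE=yes so the setup script also releases existing huge memory. In effect:

    # Equivalent of the traced clear_hp step.
    for node in "${!nodes_sys[@]}"; do
        for hp in /sys/devices/system/node/node$node/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"      # one write per page size directory
        done
    done
    export CLEAR_HUGE=yes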
00:04:07.039   06:14:23	-- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:07.039   06:14:23	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:07.039   06:14:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:07.039   06:14:23	-- common/autotest_common.sh@10 -- # set +x
00:04:07.039  ************************************
00:04:07.039  START TEST default_setup
00:04:07.039  ************************************
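run_test from common/autotest_common.sh wraps each sub-test: it checks that it was given a test name plus a command (the "'[' 2 -le 1 ']'" line), suppresses xtrace while printing the banner, then invokes default_setup. A hedged sketch of that wrapper (the real run_test also times the test and prints a matching END banner when it finishes):

    # Sketch of the wrapper behaviour visible in this log; not the script verbatim.
    run_test_sketch() {
        [ "$#" -le 1 ] && return 1       # needs at least a name and a command
        local name=$1; shift
        set +x
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        set -x
        "$@"                             # e.g. run_test_sketch default_setup default_setup
    }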
00:04:07.039   06:14:23	-- common/autotest_common.sh@1114 -- # default_setup
00:04:07.039   06:14:23	-- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:07.039   06:14:23	-- setup/hugepages.sh@49 -- # local size=2097152
00:04:07.039   06:14:23	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:07.039   06:14:23	-- setup/hugepages.sh@51 -- # shift
00:04:07.039   06:14:23	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:07.039   06:14:23	-- setup/hugepages.sh@52 -- # local node_ids
00:04:07.039   06:14:23	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:07.039   06:14:23	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:07.039   06:14:23	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:07.039   06:14:23	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:07.039   06:14:23	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:07.039   06:14:23	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:07.039   06:14:23	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:07.039   06:14:23	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:07.039   06:14:23	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:07.039   06:14:23	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:07.039   06:14:23	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:07.039   06:14:23	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:07.039   06:14:23	-- setup/hugepages.sh@73 -- # return 0
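get_test_nr_hugepages converts the requested size into a page count: 2097152 kB (2 GiB) at the 2048 kB default hugepage size gives the nr_hugepages=1024 seen above, and since only node 0 was requested it receives all of them. The arithmetic, spelled out as a sketch rather than the script verbatim:

    size=2097152                                   # requested kB for this test (2 GiB)
    default_hugepages=2048                         # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    nodes_test[0]=$nr_hugepages                    # the single requested node gets all 1024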
00:04:07.039   06:14:23	-- setup/hugepages.sh@137 -- # setup output
00:04:07.039   06:14:23	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:07.039   06:14:23	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:07.610  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:07.610  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:07.872  0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
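setup.sh leaves the virtio-blk device alone because its partitions back mounted filesystems, and rebinds the two emulated NVMe controllers (1b36 0010) from the kernel nvme driver to uio_pci_generic. To confirm such a binding by hand, the driver symlink in sysfs is enough (illustrative only; the BDF below is taken from this log):

    basename "$(readlink /sys/bus/pci/devices/0000:00:06.0/driver)"   # prints: uio_pci_generic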
00:04:07.872   06:14:24	-- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:07.872   06:14:24	-- setup/hugepages.sh@89 -- # local node
00:04:07.872   06:14:24	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:07.872   06:14:24	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:07.872   06:14:24	-- setup/hugepages.sh@92 -- # local surp
00:04:07.872   06:14:24	-- setup/hugepages.sh@93 -- # local resv
00:04:07.872   06:14:24	-- setup/hugepages.sh@94 -- # local anon
00:04:07.872   06:14:24	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:07.872    06:14:24	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:07.872    06:14:24	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:07.872    06:14:24	-- setup/common.sh@18 -- # local node=
00:04:07.872    06:14:24	-- setup/common.sh@19 -- # local var val
00:04:07.872    06:14:24	-- setup/common.sh@20 -- # local mem_f mem
00:04:07.872    06:14:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.872    06:14:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.872    06:14:24	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.872    06:14:24	-- setup/common.sh@28 -- # mapfile -t mem
00:04:07.872    06:14:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.872    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.872    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.872     06:14:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7943792 kB' 'MemAvailable:    9455492 kB' 'Buffers:            2684 kB' 'Cached:          1722732 kB' 'SwapCached:            0 kB' 'Active:           498160 kB' 'Inactive:        1345760 kB' 'Active(anon):     128988 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345760 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        120104 kB' 'Mapped:            50836 kB' 'Shmem:             10484 kB' 'KReclaimable:      67840 kB' 'Slab:             162868 kB' 'SReclaimable:      67840 kB' 'SUnreclaim:        95028 kB' 'KernelStack:        6448 kB' 'PageTables:         4344 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:07.872    06:14:24	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.872    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.872    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.872    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.872    06:14:24	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.872    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.872    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.872    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.872    06:14:24	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.872    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.872    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.873    06:14:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.873    06:14:24	-- setup/common.sh@33 -- # echo 0
00:04:07.873    06:14:24	-- setup/common.sh@33 -- # return 0
00:04:07.873   06:14:24	-- setup/hugepages.sh@97 -- # anon=0
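verify_nr_hugepages first checks the transparent-hugepage setting ("always [madvise] never" above, i.e. not disabled) and records AnonHugePages, 0 kB here; the next two get_meminfo passes fetch HugePages_Surp and HugePages_Rsvd the same way before the pools are checked against the 1024 pages just requested. A single awk call reads the same three fields at once (sketch; the script deliberately re-runs its own parser for each one):

    awk '/^(AnonHugePages|HugePages_Surp|HugePages_Rsvd):/ {print $1, $2}' /proc/meminfo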
00:04:07.873    06:14:24	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:07.873    06:14:24	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.873    06:14:24	-- setup/common.sh@18 -- # local node=
00:04:07.873    06:14:24	-- setup/common.sh@19 -- # local var val
00:04:07.873    06:14:24	-- setup/common.sh@20 -- # local mem_f mem
00:04:07.873    06:14:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.873    06:14:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.873    06:14:24	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.873    06:14:24	-- setup/common.sh@28 -- # mapfile -t mem
00:04:07.873    06:14:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.873    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874     06:14:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7943792 kB' 'MemAvailable:    9455472 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           498040 kB' 'Inactive:        1345764 kB' 'Active(anon):     128868 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119920 kB' 'Mapped:            50836 kB' 'Shmem:             10484 kB' 'KReclaimable:      67792 kB' 'Slab:             162888 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95096 kB' 'KernelStack:        6448 kB' 'PageTables:         4340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.874    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.874    06:14:24	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.875    06:14:24	-- setup/common.sh@33 -- # echo 0
00:04:07.875    06:14:24	-- setup/common.sh@33 -- # return 0
00:04:07.875   06:14:24	-- setup/hugepages.sh@99 -- # surp=0
00:04:07.875    06:14:24	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:07.875    06:14:24	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:07.875    06:14:24	-- setup/common.sh@18 -- # local node=
00:04:07.875    06:14:24	-- setup/common.sh@19 -- # local var val
00:04:07.875    06:14:24	-- setup/common.sh@20 -- # local mem_f mem
00:04:07.875    06:14:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.875    06:14:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.875    06:14:24	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.875    06:14:24	-- setup/common.sh@28 -- # mapfile -t mem
00:04:07.875    06:14:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875     06:14:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7943792 kB' 'MemAvailable:    9455472 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497696 kB' 'Inactive:        1345764 kB' 'Active(anon):     128524 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119580 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67792 kB' 'Slab:             162888 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95096 kB' 'KernelStack:        6416 kB' 'PageTables:         4244 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.875    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.875    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.876    06:14:24	-- setup/common.sh@33 -- # echo 0
00:04:07.876    06:14:24	-- setup/common.sh@33 -- # return 0
00:04:07.876   06:14:24	-- setup/hugepages.sh@100 -- # resv=0
00:04:07.876  nr_hugepages=1024
00:04:07.876   06:14:24	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:07.876   06:14:24	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:07.876  resv_hugepages=0
00:04:07.876  surplus_hugepages=0
00:04:07.876   06:14:24	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:07.876  anon_hugepages=0
00:04:07.876   06:14:24	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:07.876   06:14:24	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.876   06:14:24	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:07.876    06:14:24	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:07.876    06:14:24	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:07.876    06:14:24	-- setup/common.sh@18 -- # local node=
00:04:07.876    06:14:24	-- setup/common.sh@19 -- # local var val
00:04:07.876    06:14:24	-- setup/common.sh@20 -- # local mem_f mem
00:04:07.876    06:14:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.876    06:14:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.876    06:14:24	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.876    06:14:24	-- setup/common.sh@28 -- # mapfile -t mem
00:04:07.876    06:14:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876     06:14:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7943792 kB' 'MemAvailable:    9455472 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497700 kB' 'Inactive:        1345764 kB' 'Active(anon):     128528 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119620 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67792 kB' 'Slab:             162888 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95096 kB' 'KernelStack:        6448 kB' 'PageTables:         4340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.876    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.876    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.877    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.877    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.878    06:14:24	-- setup/common.sh@33 -- # echo 1024
00:04:07.878    06:14:24	-- setup/common.sh@33 -- # return 0
00:04:07.878   06:14:24	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.878   06:14:24	-- setup/hugepages.sh@112 -- # get_nodes
00:04:07.878   06:14:24	-- setup/hugepages.sh@27 -- # local node
00:04:07.878   06:14:24	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.878   06:14:24	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:07.878   06:14:24	-- setup/hugepages.sh@32 -- # no_nodes=1
00:04:07.878   06:14:24	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:07.878   06:14:24	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:07.878   06:14:24	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:07.878    06:14:24	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:07.878    06:14:24	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.878    06:14:24	-- setup/common.sh@18 -- # local node=0
00:04:07.878    06:14:24	-- setup/common.sh@19 -- # local var val
00:04:07.878    06:14:24	-- setup/common.sh@20 -- # local mem_f mem
00:04:07.878    06:14:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.878    06:14:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:07.878    06:14:24	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:07.878    06:14:24	-- setup/common.sh@28 -- # mapfile -t mem
00:04:07.878    06:14:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878     06:14:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7943792 kB' 'MemUsed:         4295320 kB' 'SwapCached:            0 kB' 'Active:           497700 kB' 'Inactive:        1345764 kB' 'Active(anon):     128528 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'FilePages:       1725420 kB' 'Mapped:            50764 kB' 'AnonPages:        119620 kB' 'Shmem:             10484 kB' 'KernelStack:        6448 kB' 'PageTables:         4340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      67792 kB' 'Slab:             162888 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95096 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.878    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.878    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # continue
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # IFS=': '
00:04:07.879    06:14:24	-- setup/common.sh@31 -- # read -r var val _
00:04:07.879    06:14:24	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.879    06:14:24	-- setup/common.sh@33 -- # echo 0
00:04:07.879    06:14:24	-- setup/common.sh@33 -- # return 0
00:04:07.879   06:14:24	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:07.879   06:14:24	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:07.879   06:14:24	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:07.879   06:14:24	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:07.879  node0=1024 expecting 1024
00:04:07.879   06:14:24	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:07.879   06:14:24	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:07.879  
00:04:07.879  real	0m0.969s
00:04:07.879  user	0m0.462s
00:04:07.879  sys	0m0.446s
00:04:07.879   06:14:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:07.879   06:14:24	-- common/autotest_common.sh@10 -- # set +x
00:04:07.879  ************************************
00:04:07.879  END TEST default_setup
00:04:07.879  ************************************
00:04:08.138   06:14:24	-- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:08.138   06:14:24	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:08.138   06:14:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:08.138   06:14:24	-- common/autotest_common.sh@10 -- # set +x
00:04:08.138  ************************************
00:04:08.138  START TEST per_node_1G_alloc
00:04:08.138  ************************************
00:04:08.138   06:14:24	-- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:08.138   06:14:24	-- setup/hugepages.sh@143 -- # local IFS=,
00:04:08.138   06:14:24	-- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:08.138   06:14:24	-- setup/hugepages.sh@49 -- # local size=1048576
00:04:08.138   06:14:24	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:08.138   06:14:24	-- setup/hugepages.sh@51 -- # shift
00:04:08.138   06:14:24	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:08.138   06:14:24	-- setup/hugepages.sh@52 -- # local node_ids
00:04:08.138   06:14:24	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.138   06:14:24	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:08.138   06:14:24	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:08.138   06:14:24	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:08.138   06:14:24	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.138   06:14:24	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:08.138   06:14:24	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:08.138   06:14:24	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.138   06:14:24	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.138   06:14:24	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:08.138   06:14:24	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.138   06:14:24	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:08.138   06:14:24	-- setup/hugepages.sh@73 -- # return 0
00:04:08.138   06:14:24	-- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:08.138   06:14:24	-- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:08.138   06:14:24	-- setup/hugepages.sh@146 -- # setup output
00:04:08.138   06:14:24	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.138   06:14:24	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:08.398  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:08.398  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:08.398  0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:08.398   06:14:25	-- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:08.398   06:14:25	-- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:08.398   06:14:25	-- setup/hugepages.sh@89 -- # local node
00:04:08.398   06:14:25	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.398   06:14:25	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.398   06:14:25	-- setup/hugepages.sh@92 -- # local surp
00:04:08.398   06:14:25	-- setup/hugepages.sh@93 -- # local resv
00:04:08.398   06:14:25	-- setup/hugepages.sh@94 -- # local anon
00:04:08.398   06:14:25	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:08.398    06:14:25	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.398    06:14:25	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.398    06:14:25	-- setup/common.sh@18 -- # local node=
00:04:08.398    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:08.398    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:08.398    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.398    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.398    06:14:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.398    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:08.398    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8991088 kB' 'MemAvailable:   10502768 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497844 kB' 'Inactive:        1345764 kB' 'Active(anon):     128672 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119764 kB' 'Mapped:            50864 kB' 'Shmem:             10484 kB' 'KReclaimable:      67792 kB' 'Slab:             162896 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95104 kB' 'KernelStack:        6456 kB' 'PageTables:         4452 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13983872 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55208 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.398    06:14:25	-- setup/common.sh@33 -- # echo 0
00:04:08.398    06:14:25	-- setup/common.sh@33 -- # return 0
00:04:08.398   06:14:25	-- setup/hugepages.sh@97 -- # anon=0
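The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: each line is split on ': ' into a key and a value, the key is compared literally against the requested field (AnonHugePages here), and the matching value is echoed; with no match the helper falls back to 0. Below is a minimal stand-alone sketch of that lookup; get_meminfo_sketch is a hypothetical name, not the real helper, and it only assumes the standard /proc and per-node meminfo layouts.

  #!/usr/bin/env bash
  # Sketch only: re-implements the field lookup traced above, not the real
  # setup/common.sh. Works on /proc/meminfo or a per-node meminfo file.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}          # field name, optional NUMA node number
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          line=${line#"Node $node "}    # per-node files prefix every line with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then # literal comparison, same as the traced [[ ... ]] tests
              echo "${val:-0}"
              return 0
          fi
      done < "$mem_f"
      echo 0                            # field absent: report 0, as the trace does
  }
  # anon=$(get_meminfo_sketch AnonHugePages)   # -> 0 on this runner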
00:04:08.398    06:14:25	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:08.398    06:14:25	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.398    06:14:25	-- setup/common.sh@18 -- # local node=
00:04:08.398    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:08.398    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:08.398    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.398    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.398    06:14:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.398    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:08.398    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8991416 kB' 'MemAvailable:   10503096 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497924 kB' 'Inactive:        1345764 kB' 'Active(anon):     128752 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119888 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67792 kB' 'Slab:             162900 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95108 kB' 'KernelStack:        6448 kB' 'PageTables:         4344 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13983872 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.398    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.398    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.399    06:14:25	-- setup/common.sh@33 -- # echo 0
00:04:08.399    06:14:25	-- setup/common.sh@33 -- # return 0
00:04:08.399   06:14:25	-- setup/hugepages.sh@99 -- # surp=0
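Each comparison in the trace shows the requested field escaped character by character (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) because the script quotes the right-hand side of == inside [[ ]], so bash matches it as a literal string rather than a glob, and xtrace prints literal patterns escaped. A small check that reproduces the same kind of rendering, assuming nothing beyond bash's [[ ]] and set -x:

  #!/usr/bin/env bash
  # Quoted RHS of == inside [[ ]] is a literal match; xtrace escapes it when printing.
  get=HugePages_Surp
  var=MemTotal
  set -x
  [[ $var == "$get" ]] && echo match || echo no-match   # prints no-match
  set +x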
00:04:08.399    06:14:25	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:08.399    06:14:25	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:08.399    06:14:25	-- setup/common.sh@18 -- # local node=
00:04:08.399    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:08.399    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:08.399    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.399    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.399    06:14:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.399    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:08.399    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8991416 kB' 'MemAvailable:   10503096 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           498008 kB' 'Inactive:        1345764 kB' 'Active(anon):     128836 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119924 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67792 kB' 'Slab:             162900 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95108 kB' 'KernelStack:        6448 kB' 'PageTables:         4340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13983872 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55192 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.399    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.399    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.400    06:14:25	-- setup/common.sh@33 -- # echo 0
00:04:08.400    06:14:25	-- setup/common.sh@33 -- # return 0
00:04:08.400   06:14:25	-- setup/hugepages.sh@100 -- # resv=0
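With anon, surp and resv all read back as 0, the only hugepages in play are the 512 pre-allocated ones. The same counters the script pulls from /proc/meminfo are also exported per page size under sysfs; the sketch below reads them that way. The hugepages-2048kB directory name matches the Hugepagesize reported in the dumps above and is an assumption for other hosts.

  #!/usr/bin/env bash
  # Sketch: read the hugepage pool counters from sysfs instead of /proc/meminfo.
  hp=/sys/kernel/mm/hugepages/hugepages-2048kB
  for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
      printf '%s=%s\n' "$f" "$(cat "$hp/$f")"
  done
  # Expected on this runner: nr_hugepages=512, free_hugepages=512, the rest 0.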
00:04:08.400   06:14:25	-- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:08.400  nr_hugepages=512
00:04:08.400   06:14:25	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:08.400  resv_hugepages=0
00:04:08.400   06:14:25	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:08.400  surplus_hugepages=0
00:04:08.400   06:14:25	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:08.400  anon_hugepages=0
00:04:08.400   06:14:25	-- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:08.400   06:14:25	-- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
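The two arithmetic checks above are the accounting step: the 512 pages the test configured must equal the configured pool plus surplus plus reserved pages, and must also equal nr_hugepages itself. A minimal restatement of that check with the values read back in this run:

  #!/usr/bin/env bash
  # Restates the accounting asserted by hugepages.sh@107/109 with this run's values.
  expected=512
  nr_hugepages=512 surp=0 resv=0 anon_hugepages=0
  (( expected == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
  (( expected == nr_hugepages ))               || echo "pool size drifted" >&2
  echo "hugepage pool consistent: $nr_hugepages pages (surp=$surp resv=$resv anon=$anon_hugepages)"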
00:04:08.400    06:14:25	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:08.400    06:14:25	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.400    06:14:25	-- setup/common.sh@18 -- # local node=
00:04:08.400    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:08.400    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:08.400    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.400    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.400    06:14:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.400    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:08.400    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8991416 kB' 'MemAvailable:   10503096 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497972 kB' 'Inactive:        1345764 kB' 'Active(anon):     128800 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119920 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67792 kB' 'Slab:             162900 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95108 kB' 'KernelStack:        6464 kB' 'PageTables:         4392 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13983872 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55208 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.400    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.400    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.659    06:14:25	-- setup/common.sh@33 -- # echo 512
00:04:08.659    06:14:25	-- setup/common.sh@33 -- # return 0
00:04:08.659   06:14:25	-- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
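The block above is the tail of a get_meminfo HugePages_Total call: setup/common.sh walks /proc/meminfo one line at a time with IFS=': ', skips every field that is not the one requested, and echoes the value (512 here) once it matches; hugepages.sh@110 then checks that this system-wide count equals nr_hugepages + surplus + reserved. A condensed restatement of that scan, using the same file paths the trace shows (the function name below is invented for the sketch):

    get_meminfo_sketch() {
        local want=$1 node=${2:-}
        local file=/proc/meminfo
        # node-specific queries switch to the per-node meminfo, as at common.sh@23-24
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # per-node files prefix each line with "Node <N> "; drop that prefix up front
        while IFS=': ' read -r var val _; do
            if [[ $var == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$file")
        return 1
    }
    # get_meminfo_sketch HugePages_Total   -> 512 in this run
    # get_meminfo_sketch HugePages_Surp 0  -> 0  (the per-node pass a few lines below)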
00:04:08.659   06:14:25	-- setup/hugepages.sh@112 -- # get_nodes
00:04:08.659   06:14:25	-- setup/hugepages.sh@27 -- # local node
00:04:08.659   06:14:25	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.659   06:14:25	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:08.659   06:14:25	-- setup/hugepages.sh@32 -- # no_nodes=1
00:04:08.659   06:14:25	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
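get_nodes (hugepages.sh@27-33) enumerates the NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) with extglob and keys an array by the numeric suffix; this VM has a single node, so no_nodes ends up as 1 and node 0 carries all 512 pages. Roughly, with the per-node count read from sysfs (that path is an assumption here, the trace only shows the resulting value 512):

    shopt -s extglob nullglob
    nodes_sys=()
    no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}     # "/sys/devices/system/node/node0" -> "0"
        nodes_sys[$id]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        (( ++no_nodes ))
    done
    echo "nodes found: $no_nodes"   # 1 on this machine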
00:04:08.659   06:14:25	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:08.659   06:14:25	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:08.659    06:14:25	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:08.659    06:14:25	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.659    06:14:25	-- setup/common.sh@18 -- # local node=0
00:04:08.659    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:08.659    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:08.659    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.659    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:08.659    06:14:25	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:08.659    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:08.659    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8991416 kB' 'MemUsed:         3247696 kB' 'SwapCached:            0 kB' 'Active:           498028 kB' 'Inactive:        1345764 kB' 'Active(anon):     128856 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'FilePages:       1725420 kB' 'Mapped:            50764 kB' 'AnonPages:        119940 kB' 'Shmem:             10484 kB' 'KernelStack:        6448 kB' 'PageTables:         4340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      67792 kB' 'Slab:             162892 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95100 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.659    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.659    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.660    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.660    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.660    06:14:25	-- setup/common.sh@33 -- # echo 0
00:04:08.660    06:14:25	-- setup/common.sh@33 -- # return 0
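This second pass is the same scan but per node: because node=0 was passed, common.sh@23-29 switches mem_f to /sys/devices/system/node/node0/meminfo, loads it with mapfile, and strips the "Node 0 " prefix from every line before parsing, using the extglob substitution visible at common.sh@29:

    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemFree: ..." -> "MemFree: ..."

The node-0 file reports HugePages_Surp 0, so nothing is added on top of the 512 pages already counted for the node.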
00:04:08.660  node0=512 expecting 512
00:04:08.660  ************************************
00:04:08.660  END TEST per_node_1G_alloc
00:04:08.660  ************************************
00:04:08.660   06:14:25	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.660   06:14:25	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.660   06:14:25	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.660   06:14:25	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.660   06:14:25	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:08.660   06:14:25	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:08.660  
00:04:08.660  real	0m0.556s
00:04:08.660  user	0m0.264s
00:04:08.660  sys	0m0.307s
00:04:08.660   06:14:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:08.660   06:14:25	-- common/autotest_common.sh@10 -- # set +x
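That closes per_node_1G_alloc: 512 pages of the default 2048 kB size were expected on node 0 (512 x 2048 kB = 1 GiB, hence the test name), the system-wide and per-node counts both came back as 512, and the whole check took about 0.56 s. Outside the harness the same per-node figure can be read straight from sysfs (standard kernel path, not taken from this trace):

    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages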
00:04:08.660   06:14:25	-- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:08.660   06:14:25	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:08.660   06:14:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:08.660   06:14:25	-- common/autotest_common.sh@10 -- # set +x
00:04:08.660  ************************************
00:04:08.660  START TEST even_2G_alloc
00:04:08.660  ************************************
00:04:08.660   06:14:25	-- common/autotest_common.sh@1114 -- # even_2G_alloc
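run_test wraps the next case the same way: even_2G_alloc asks for 2 GiB of hugepages spread evenly across all nodes (HUGE_EVEN_ALLOC=yes and NRHUGE=1024 a few lines below), re-runs setup.sh, and then verifies the counters again with verify_nr_hugepages.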
00:04:08.660   06:14:25	-- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:08.660   06:14:25	-- setup/hugepages.sh@49 -- # local size=2097152
00:04:08.660   06:14:25	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:08.660   06:14:25	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.660   06:14:25	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:08.660   06:14:25	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:08.660   06:14:25	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:08.660   06:14:25	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.660   06:14:25	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:08.660   06:14:25	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:08.660   06:14:25	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.660   06:14:25	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.660   06:14:25	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:08.660   06:14:25	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:08.660   06:14:25	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:08.660   06:14:25	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:08.660   06:14:25	-- setup/hugepages.sh@83 -- # : 0
00:04:08.660   06:14:25	-- setup/hugepages.sh@84 -- # : 0
00:04:08.660   06:14:25	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
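get_test_nr_hugepages converts the requested size into a page count: 2097152 kB divided by the 2048 kB default hugepage size gives 1024 pages, and with only one node the whole allotment lands in nodes_test[0]. The arithmetic, spelled out (default_hugepages is assumed to equal the 2048 kB Hugepagesize that meminfo reports later in this trace):

    size_kb=2097152
    hugepagesize_kb=2048
    echo $(( size_kb / hugepagesize_kb ))   # -> 1024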
00:04:08.660   06:14:25	-- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:08.660   06:14:25	-- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:08.660   06:14:25	-- setup/hugepages.sh@153 -- # setup output
00:04:08.660   06:14:25	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.660   06:14:25	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:08.920  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:08.920  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:08.920  0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
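setup.sh leaves 0000:00:03.0 alone because the virtio-blk device (1af4 1001) behind it hosts the mounted vda partitions, and it reports that the two QEMU NVMe test controllers (1b36 0010) are already bound to uio_pci_generic. A quick way to confirm bindings like these by hand (plain sysfs, not part of setup.sh):

    for dev in 0000:00:03.0 0000:00:06.0 0000:00:07.0; do
        drv=$(readlink "/sys/bus/pci/devices/$dev/driver" 2>/dev/null)
        printf '%s -> %s\n' "$dev" "${drv##*/}"
    done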
00:04:08.920   06:14:25	-- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:08.920   06:14:25	-- setup/hugepages.sh@89 -- # local node
00:04:08.920   06:14:25	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.920   06:14:25	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.920   06:14:25	-- setup/hugepages.sh@92 -- # local surp
00:04:08.920   06:14:25	-- setup/hugepages.sh@93 -- # local resv
00:04:08.920   06:14:25	-- setup/hugepages.sh@94 -- # local anon
00:04:08.920   06:14:25	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
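The check at hugepages.sh@96 looks at the transparent-hugepage toggle: the string "always [madvise] never" comes from /sys/kernel/mm/transparent_hugepage/enabled, and since the active setting is not [never], the test goes on to sample AnonHugePages. The same test, standalone:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so anonymous huge page usage is worth recording
        grep AnonHugePages /proc/meminfo
    fi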
00:04:08.920    06:14:25	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.920    06:14:25	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.920    06:14:25	-- setup/common.sh@18 -- # local node=
00:04:08.920    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:08.920    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:08.920    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.920    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.920    06:14:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.920    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:08.920    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7943764 kB' 'MemAvailable:    9455444 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           498344 kB' 'Inactive:        1345764 kB' 'Active(anon):     129172 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        120260 kB' 'Mapped:            50892 kB' 'Shmem:             10484 kB' 'KReclaimable:      67792 kB' 'Slab:             162896 kB' 'SReclaimable:      67792 kB' 'SUnreclaim:        95104 kB' 'KernelStack:        6472 kB' 'PageTables:         4312 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55192 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.920    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.920    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.921    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.921    06:14:25	-- setup/common.sh@33 -- # echo 0
00:04:08.921    06:14:25	-- setup/common.sh@33 -- # return 0
00:04:08.921   06:14:25	-- setup/hugepages.sh@97 -- # anon=0
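AnonHugePages came back as 0 kB, so the verifier records anon=0 and moves on; the two passes below repeat the identical field-by-field scan for HugePages_Surp and then HugePages_Rsvd.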
00:04:08.921    06:14:25	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:08.921    06:14:25	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.921    06:14:25	-- setup/common.sh@18 -- # local node=
00:04:08.921    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:08.921    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:08.921    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.921    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.921    06:14:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.921    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:08.921    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.921    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7944016 kB' 'MemAvailable:    9455692 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497936 kB' 'Inactive:        1345764 kB' 'Active(anon):     128764 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119848 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162884 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95104 kB' 'KernelStack:        6464 kB' 'PageTables:         4392 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.922    06:14:25	-- setup/common.sh@32 -- # continue
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:08.922    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.183    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.183    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.184    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.184    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.185    06:14:25	-- setup/common.sh@33 -- # echo 0
00:04:09.185    06:14:25	-- setup/common.sh@33 -- # return 0
00:04:09.185   06:14:25	-- setup/hugepages.sh@99 -- # surp=0
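HugePages_Surp is 0 as well; surplus pages only appear when the kernel hands out pages beyond nr_hugepages through the overcommit knob, which this test does not use. If needed, the fields the verifier collects here can be pulled in one shot (awk shorthand for the same lines the loop below walks to):

    awk '/^HugePages_(Surp|Rsvd|Free|Total):/ {print $1, $2}' /proc/meminfo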
00:04:09.185    06:14:25	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.185    06:14:25	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.185    06:14:25	-- setup/common.sh@18 -- # local node=
00:04:09.185    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:09.185    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.185    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.185    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.185    06:14:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.185    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.185    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7944016 kB' 'MemAvailable:    9455692 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           498004 kB' 'Inactive:        1345764 kB' 'Active(anon):     128832 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119948 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162884 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95104 kB' 'KernelStack:        6464 kB' 'PageTables:         4400 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.185    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.185    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.186    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.186    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.187    06:14:25	-- setup/common.sh@33 -- # echo 0
00:04:09.187    06:14:25	-- setup/common.sh@33 -- # return 0
00:04:09.187   06:14:25	-- setup/hugepages.sh@100 -- # resv=0
00:04:09.187  nr_hugepages=1024
00:04:09.187  resv_hugepages=0
00:04:09.187  surplus_hugepages=0
00:04:09.187  anon_hugepages=0
00:04:09.187   06:14:25	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:09.187   06:14:25	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.187   06:14:25	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.187   06:14:25	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.187   06:14:25	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.187   06:14:25	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
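The get_meminfo calls traced above reduce to a small parsing loop: pick /proc/meminfo (or the per-node file under sysfs when a node argument is given), strip the per-node "Node N" prefix, then read key/value pairs until the requested field matches. A minimal stand-alone sketch of that loop, reconstructed from the trace rather than copied from setup/common.sh (the function name and error handling are illustrative only):

  get_meminfo_sketch() {
      # $1 = field to extract (e.g. HugePages_Rsvd), $2 = optional NUMA node
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # strip the "Node <n> " prefix found in per-node files (harmless for /proc/meminfo)
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

  get_meminfo_sketch HugePages_Rsvd      # prints 0 on this runner
  get_meminfo_sketch HugePages_Surp 0    # per-node lookup, as used further down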
00:04:09.187    06:14:25	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.187    06:14:25	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.187    06:14:25	-- setup/common.sh@18 -- # local node=
00:04:09.187    06:14:25	-- setup/common.sh@19 -- # local var val
00:04:09.187    06:14:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.187    06:14:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.187    06:14:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.187    06:14:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.187    06:14:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.187    06:14:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187     06:14:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7944016 kB' 'MemAvailable:    9455692 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           498032 kB' 'Inactive:        1345764 kB' 'Active(anon):     128860 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'AnonPages:        119944 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162884 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95104 kB' 'KernelStack:        6448 kB' 'PageTables:         4348 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.187    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.187    06:14:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:25	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:25	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.188    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.188    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.188    06:14:26	-- setup/common.sh@33 -- # echo 1024
00:04:09.188    06:14:26	-- setup/common.sh@33 -- # return 0
00:04:09.188   06:14:26	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.189   06:14:26	-- setup/hugepages.sh@112 -- # get_nodes
00:04:09.189   06:14:26	-- setup/hugepages.sh@27 -- # local node
00:04:09.189   06:14:26	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.189   06:14:26	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:09.189   06:14:26	-- setup/hugepages.sh@32 -- # no_nodes=1
00:04:09.189   06:14:26	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
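get_nodes, traced just above (hugepages.sh@27-@33), simply enumerates the NUMA nodes exposed under /sys/devices/system/node and notes how many hugepages each one currently holds (a single node on this VM). A rough stand-alone equivalent using the standard per-node sysfs counter for 2048 kB pages (the array name is illustrative):

  shopt -s nullglob
  declare -A nodes_sketch
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      n=${node_dir##*node}
      # per-node count of 2 MiB hugepages currently configured
      nodes_sketch[$n]=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "nodes: ${!nodes_sketch[*]} -> ${nodes_sketch[*]}"   # e.g. "nodes: 0 -> 1024" here

The loop that follows (@115-@117) then adds each node's reserved and surplus pages on top of that count before comparing against what the test expects.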
00:04:09.189   06:14:26	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.189   06:14:26	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.189    06:14:26	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.189    06:14:26	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.189    06:14:26	-- setup/common.sh@18 -- # local node=0
00:04:09.189    06:14:26	-- setup/common.sh@19 -- # local var val
00:04:09.189    06:14:26	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.189    06:14:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.189    06:14:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.189    06:14:26	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.189    06:14:26	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.189    06:14:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189     06:14:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7944016 kB' 'MemUsed:         4295096 kB' 'SwapCached:            0 kB' 'Active:           498044 kB' 'Inactive:        1345764 kB' 'Active(anon):     128872 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'Dirty:               340 kB' 'Writeback:             0 kB' 'FilePages:       1725420 kB' 'Mapped:            50764 kB' 'AnonPages:        119960 kB' 'Shmem:             10484 kB' 'KernelStack:        6464 kB' 'PageTables:         4400 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      67780 kB' 'Slab:             162884 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95104 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.189    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.189    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.190    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.190    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.190    06:14:26	-- setup/common.sh@33 -- # echo 0
00:04:09.190    06:14:26	-- setup/common.sh@33 -- # return 0
00:04:09.190   06:14:26	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:09.190   06:14:26	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:09.190   06:14:26	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:09.190   06:14:26	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:09.190  node0=1024 expecting 1024
00:04:09.190  ************************************
00:04:09.190  END TEST even_2G_alloc
00:04:09.190  ************************************
00:04:09.190   06:14:26	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:09.190   06:14:26	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:09.190  
00:04:09.190  real	0m0.579s
00:04:09.190  user	0m0.259s
00:04:09.190  sys	0m0.310s
00:04:09.190   06:14:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.190   06:14:26	-- common/autotest_common.sh@10 -- # set +x
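The even_2G_alloc block that just finished verifies a plain accounting identity: system-wide HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages (the @107/@110 checks above), and node0 must hold its expected share, hence the "node0=1024 expecting 1024" line. The same check written as a self-contained function, with the expectation hard-coded to this run's value (the helper name is made up):

  verify_even_alloc_sketch() {
      local expected=1024 total rsvd surp node0
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
      surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
      # system-wide: everything allocated is accounted for
      (( total == expected + surp + rsvd )) || return 1
      # per-node: node0 holds the full 1024 pages on this single-node VM
      node0=$(< /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
      (( node0 == expected ))
  }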
00:04:09.190   06:14:26	-- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:09.190   06:14:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:09.190   06:14:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:09.190   06:14:26	-- common/autotest_common.sh@10 -- # set +x
00:04:09.190  ************************************
00:04:09.190  START TEST odd_alloc
00:04:09.190  ************************************
00:04:09.190   06:14:26	-- common/autotest_common.sh@1114 -- # odd_alloc
00:04:09.190   06:14:26	-- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:09.190   06:14:26	-- setup/hugepages.sh@49 -- # local size=2098176
00:04:09.190   06:14:26	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:09.190   06:14:26	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:09.190   06:14:26	-- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:09.190   06:14:26	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:09.190   06:14:26	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:09.190   06:14:26	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:09.190   06:14:26	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:09.190   06:14:26	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:09.190   06:14:26	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.190   06:14:26	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.190   06:14:26	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:09.190   06:14:26	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:09.190   06:14:26	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.190   06:14:26	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:09.190   06:14:26	-- setup/hugepages.sh@83 -- # : 0
00:04:09.190   06:14:26	-- setup/hugepages.sh@84 -- # : 0
00:04:09.190   06:14:26	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.190   06:14:26	-- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:09.190   06:14:26	-- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:09.190   06:14:26	-- setup/hugepages.sh@160 -- # setup output
00:04:09.190   06:14:26	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.190   06:14:26	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:09.449  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:09.711  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.711  0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
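For odd_alloc the test re-runs scripts/setup.sh with the environment set a few lines up (HUGEMEM=2049, HUGE_EVEN_ALLOC=yes); the three device lines above are setup.sh reporting PCI functions it leaves alone or finds already bound to uio_pci_generic. To reproduce the hugepage side of this step by hand, something along these lines should work (the repo path is the one used in this job; treating HUGEMEM as megabytes and needing sudo are assumptions on my part, not read from the log):

  sudo HUGEMEM=2049 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # then confirm what the kernel actually handed out
  grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo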
00:04:09.711   06:14:26	-- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:09.711   06:14:26	-- setup/hugepages.sh@89 -- # local node
00:04:09.711   06:14:26	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:09.711   06:14:26	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:09.711   06:14:26	-- setup/hugepages.sh@92 -- # local surp
00:04:09.711   06:14:26	-- setup/hugepages.sh@93 -- # local resv
00:04:09.711   06:14:26	-- setup/hugepages.sh@94 -- # local anon
00:04:09.711   06:14:26	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
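The hugepages.sh@96 test just above reads the transparent-hugepage mode string ("always [madvise] never" on this kernel) and, since the active mode here is not [never], goes on to sample AnonHugePages in the next get_meminfo call. The same probe in isolation (variable names are illustrative):

  thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  # the active mode is the bracketed word, e.g. "always [madvise] never"
  if [[ $thp_mode != *"[never]"* ]]; then
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      echo "THP active ($thp_mode); AnonHugePages=${anon_kb} kB"
  fi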
00:04:09.711    06:14:26	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:09.711    06:14:26	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:09.711    06:14:26	-- setup/common.sh@18 -- # local node=
00:04:09.711    06:14:26	-- setup/common.sh@19 -- # local var val
00:04:09.711    06:14:26	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.711    06:14:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.711    06:14:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.711    06:14:26	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.711    06:14:26	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.711    06:14:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711     06:14:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7950436 kB' 'MemAvailable:    9462112 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           498348 kB' 'Inactive:        1345764 kB' 'Active(anon):     129176 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        120300 kB' 'Mapped:            50936 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162856 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95076 kB' 'KernelStack:        6488 kB' 'PageTables:         4484 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13458560 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55208 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.711    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.711    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.712    06:14:26	-- setup/common.sh@33 -- # echo 0
00:04:09.712    06:14:26	-- setup/common.sh@33 -- # return 0
00:04:09.712   06:14:26	-- setup/hugepages.sh@97 -- # anon=0
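(The trace above is setup/common.sh's get_meminfo scanning every /proc/meminfo key until it reaches the one requested, here AnonHugePages; since that entry is 0 kB, hugepages.sh records anon=0. A minimal stand-alone sketch of that lookup, reconstructed from the trace rather than copied from the SPDK source — the function name and variables below are assumptions:

#!/usr/bin/env bash
# Sketch of a meminfo lookup modeled on the traced loop: split each
# "Key:   value kB" line on ': ', skip non-matching keys, and print
# the value of the requested key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

anon=$(get_meminfo_sketch AnonHugePages)   # 0 on this builder
echo "anon=$anon"
)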
00:04:09.712    06:14:26	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:09.712    06:14:26	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.712    06:14:26	-- setup/common.sh@18 -- # local node=
00:04:09.712    06:14:26	-- setup/common.sh@19 -- # local var val
00:04:09.712    06:14:26	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.712    06:14:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.712    06:14:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.712    06:14:26	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.712    06:14:26	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.712    06:14:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712     06:14:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7950436 kB' 'MemAvailable:    9462112 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497944 kB' 'Inactive:        1345764 kB' 'Active(anon):     128772 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        119936 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162900 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95120 kB' 'KernelStack:        6464 kB' 'PageTables:         4396 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13458560 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.712    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.712    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.713    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.713    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.713    06:14:26	-- setup/common.sh@33 -- # echo 0
00:04:09.713    06:14:26	-- setup/common.sh@33 -- # return 0
00:04:09.713   06:14:26	-- setup/hugepages.sh@99 -- # surp=0
00:04:09.713    06:14:26	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.713    06:14:26	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.713    06:14:26	-- setup/common.sh@18 -- # local node=
00:04:09.713    06:14:26	-- setup/common.sh@19 -- # local var val
00:04:09.713    06:14:26	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.714    06:14:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.714    06:14:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.714    06:14:26	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.714    06:14:26	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.714    06:14:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714     06:14:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7950436 kB' 'MemAvailable:    9462112 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497904 kB' 'Inactive:        1345764 kB' 'Active(anon):     128732 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        119840 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162896 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95116 kB' 'KernelStack:        6448 kB' 'PageTables:         4344 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13458560 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55192 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.714    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.714    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.715    06:14:26	-- setup/common.sh@33 -- # echo 0
00:04:09.715    06:14:26	-- setup/common.sh@33 -- # return 0
00:04:09.715  nr_hugepages=1025
00:04:09.715  resv_hugepages=0
00:04:09.715  surplus_hugepages=0
00:04:09.715  anon_hugepages=0
00:04:09.715   06:14:26	-- setup/hugepages.sh@100 -- # resv=0
00:04:09.715   06:14:26	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:09.715   06:14:26	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.715   06:14:26	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.715   06:14:26	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.715   06:14:26	-- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:09.715   06:14:26	-- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
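(By this point hugepages.sh has collected anon=0, surp=0 and resv=0, prints the summary echoed above, and asserts that the kernel's HugePages_Total of 1025 equals nr_hugepages + surp + resv. A hedged sketch of that bookkeeping — variable names are taken from the echoed output, and the exact assertion form is an assumption based on the traced (( ... )) tests:

#!/usr/bin/env bash
# Consistency check modeled on the hugepages.sh trace: the pool the
# kernel reports must equal the requested pages plus surplus plus
# reserved pages, otherwise the verification step fails.
nr_hugepages=1025
anon=0; surp=0; resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "hugepage accounting mismatch (total=$total)" >&2
    exit 1
fi
)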
00:04:09.715    06:14:26	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.715    06:14:26	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.715    06:14:26	-- setup/common.sh@18 -- # local node=
00:04:09.715    06:14:26	-- setup/common.sh@19 -- # local var val
00:04:09.715    06:14:26	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.715    06:14:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.715    06:14:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.715    06:14:26	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.715    06:14:26	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.715    06:14:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715     06:14:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7951108 kB' 'MemAvailable:    9462784 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497892 kB' 'Inactive:        1345764 kB' 'Active(anon):     128720 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        119832 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162896 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95116 kB' 'KernelStack:        6448 kB' 'PageTables:         4344 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13458560 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55192 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.715    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.715    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.716    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.716    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.716    06:14:26	-- setup/common.sh@33 -- # echo 1025
00:04:09.716    06:14:26	-- setup/common.sh@33 -- # return 0
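The trace above is the setup/common.sh get_meminfo pattern: pick /proc/meminfo or the per-node meminfo file, then scan it key by key until the requested field (here HugePages_Total) matches and print its value. A minimal standalone sketch of that scan, using a hypothetical helper name and condensed from the traced logic rather than copied from it:

  #!/usr/bin/env bash
  # Sketch of the get_meminfo scan traced above (illustrative helper name,
  # simplified from the setup/common.sh pattern).
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookups read the node-specific meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#Node +([0-9]) }              # per-node files prefix each line with "Node N "
          IFS=': ' read -r var val _ <<<"$line"    # split "HugePages_Total:   1025" into key/value
          if [[ $var == "$get" ]]; then
              echo "$val"                          # e.g. 1025 for HugePages_Total in this run
              return 0
          fi
      done <"$mem_f"
      return 1
  }
  # Usage matching the calls seen in this log:
  #   get_meminfo_sketch HugePages_Total        # -> 1025 (system-wide)
  #   get_meminfo_sketch HugePages_Surp 0       # -> 0    (node 0)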
00:04:09.716   06:14:26	-- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:09.716   06:14:26	-- setup/hugepages.sh@112 -- # get_nodes
00:04:09.716   06:14:26	-- setup/hugepages.sh@27 -- # local node
00:04:09.716   06:14:26	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.716   06:14:26	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:09.716   06:14:26	-- setup/hugepages.sh@32 -- # no_nodes=1
00:04:09.716   06:14:26	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:09.716   06:14:26	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.716   06:14:26	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.716    06:14:26	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.716    06:14:26	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.716    06:14:26	-- setup/common.sh@18 -- # local node=0
00:04:09.716    06:14:26	-- setup/common.sh@19 -- # local var val
00:04:09.716    06:14:26	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.716    06:14:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.716    06:14:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.717    06:14:26	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.717    06:14:26	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.717    06:14:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717     06:14:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7951368 kB' 'MemUsed:         4287744 kB' 'SwapCached:            0 kB' 'Active:           497688 kB' 'Inactive:        1345764 kB' 'Active(anon):     128516 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'FilePages:       1725420 kB' 'Mapped:            50764 kB' 'AnonPages:        119624 kB' 'Shmem:             10484 kB' 'KernelStack:        6496 kB' 'PageTables:         4504 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      67780 kB' 'Slab:             162892 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95112 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1025' 'HugePages_Free:   1025' 'HugePages_Surp:      0'
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # continue
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # IFS=': '
00:04:09.717    06:14:26	-- setup/common.sh@31 -- # read -r var val _
00:04:09.717    06:14:26	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.717    06:14:26	-- setup/common.sh@33 -- # echo 0
00:04:09.717    06:14:26	-- setup/common.sh@33 -- # return 0
00:04:09.717   06:14:26	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:09.717   06:14:26	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:09.717   06:14:26	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:09.717   06:14:26	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:09.718  node0=1025 expecting 1025
00:04:09.718  ************************************
00:04:09.718  END TEST odd_alloc
00:04:09.718  ************************************
00:04:09.718   06:14:26	-- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:09.718   06:14:26	-- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:09.718  
00:04:09.718  real	0m0.567s
00:04:09.718  user	0m0.272s
00:04:09.718  sys	0m0.295s
00:04:09.718   06:14:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.718   06:14:26	-- common/autotest_common.sh@10 -- # set +x
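The odd_alloc verification that just finished reduces to one arithmetic check per node: the hugepages reported by the kernel must equal the requested count plus surplus and reserved pages, and each node must end up with its expected share. A compact sketch of that bookkeeping, using the values observed in this run (1025 total, 0 surplus, 0 reserved, single node); the function name is illustrative, not the real setup/hugepages.sh code:

  # Sketch of the per-node check traced above (hypothetical name).
  verify_nodes_sketch() {
      local nr_hugepages=1025 surp=0 resv=0      # values seen in this run
      declare -A nodes_test=([0]=1025)           # HugePages_Total read from node0

      # System-wide total must account for surplus and reserved pages.
      (( 1025 == nr_hugepages + surp + resv )) || return 1

      local node
      for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))         # add reserved pages to the node's expectation
          (( nodes_test[node] += 0 ))            # per-node HugePages_Surp was 0 here
          echo "node$node=${nodes_test[node]} expecting 1025"
      done
  }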
00:04:09.976   06:14:26	-- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:09.976   06:14:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:09.976   06:14:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:09.976   06:14:26	-- common/autotest_common.sh@10 -- # set +x
00:04:09.976  ************************************
00:04:09.976  START TEST custom_alloc
00:04:09.976  ************************************
00:04:09.976   06:14:26	-- common/autotest_common.sh@1114 -- # custom_alloc
00:04:09.976   06:14:26	-- setup/hugepages.sh@167 -- # local IFS=,
00:04:09.976   06:14:26	-- setup/hugepages.sh@169 -- # local node
00:04:09.976   06:14:26	-- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:09.976   06:14:26	-- setup/hugepages.sh@170 -- # local nodes_hp
00:04:09.976   06:14:26	-- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:09.976   06:14:26	-- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:09.976   06:14:26	-- setup/hugepages.sh@49 -- # local size=1048576
00:04:09.976   06:14:26	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:09.976   06:14:26	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:09.976   06:14:26	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:09.976   06:14:26	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:09.976   06:14:26	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:09.976   06:14:26	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:09.976   06:14:26	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.976   06:14:26	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.976   06:14:26	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:09.976   06:14:26	-- setup/hugepages.sh@83 -- # : 0
00:04:09.976   06:14:26	-- setup/hugepages.sh@84 -- # : 0
00:04:09.976   06:14:26	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:09.976   06:14:26	-- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:09.976   06:14:26	-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:09.976   06:14:26	-- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:09.976   06:14:26	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:09.976   06:14:26	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:09.976   06:14:26	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:09.976   06:14:26	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:09.976   06:14:26	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.976   06:14:26	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.976   06:14:26	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:09.976   06:14:26	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:09.976   06:14:26	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:09.976   06:14:26	-- setup/hugepages.sh@78 -- # return 0
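The custom_alloc setup above turns a requested hugetlb size into a page count: 1048576 kB at the 2048 kB Hugepagesize reported later in this log works out to 512 pages, all of which are then pinned to node0 via HUGENODE. A minimal sketch of that conversion, assuming the 2 MiB default hugepage size (the variable names are illustrative):

  size_kb=1048576                                      # argument to get_test_nr_hugepages
  hugepagesize_kb=2048                                 # Hugepagesize reported in this run
  if (( size_kb >= hugepagesize_kb )); then            # guard mirrored from hugepages.sh@55
      nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1048576 / 2048 = 512
  fi
  echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"          # all 512 pages assigned to node0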
00:04:09.976   06:14:26	-- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:09.976   06:14:26	-- setup/hugepages.sh@187 -- # setup output
00:04:09.976   06:14:26	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.976   06:14:26	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:10.238  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:10.238  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.238  0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.238   06:14:27	-- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:10.238   06:14:27	-- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:10.238   06:14:27	-- setup/hugepages.sh@89 -- # local node
00:04:10.238   06:14:27	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.238   06:14:27	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.238   06:14:27	-- setup/hugepages.sh@92 -- # local surp
00:04:10.238   06:14:27	-- setup/hugepages.sh@93 -- # local resv
00:04:10.238   06:14:27	-- setup/hugepages.sh@94 -- # local anon
00:04:10.238   06:14:27	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.238    06:14:27	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.238    06:14:27	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.238    06:14:27	-- setup/common.sh@18 -- # local node=
00:04:10.238    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:10.238    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:10.238    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.238    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.238    06:14:27	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.238    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:10.238    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8998976 kB' 'MemAvailable:   10510652 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           498404 kB' 'Inactive:        1345764 kB' 'Active(anon):     129232 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        120400 kB' 'Mapped:            50912 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162872 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95092 kB' 'KernelStack:        6440 kB' 'PageTables:         4220 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13983872 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55192 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.238    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.238    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.239    06:14:27	-- setup/common.sh@33 -- # echo 0
00:04:10.239    06:14:27	-- setup/common.sh@33 -- # return 0
00:04:10.239   06:14:27	-- setup/hugepages.sh@97 -- # anon=0
00:04:10.239    06:14:27	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.239    06:14:27	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.239    06:14:27	-- setup/common.sh@18 -- # local node=
00:04:10.239    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:10.239    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:10.239    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.239    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.239    06:14:27	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.239    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:10.239    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8998976 kB' 'MemAvailable:   10510652 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497964 kB' 'Inactive:        1345764 kB' 'Active(anon):     128792 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        119880 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162896 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95116 kB' 'KernelStack:        6448 kB' 'PageTables:         4340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13983872 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.239    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.239    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.240    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.240    06:14:27	-- setup/common.sh@33 -- # echo 0
00:04:10.240    06:14:27	-- setup/common.sh@33 -- # return 0
00:04:10.240   06:14:27	-- setup/hugepages.sh@99 -- # surp=0
00:04:10.240    06:14:27	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.240    06:14:27	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.240    06:14:27	-- setup/common.sh@18 -- # local node=
00:04:10.240    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:10.240    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:10.240    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.240    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.240    06:14:27	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.240    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:10.240    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.240    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8998976 kB' 'MemAvailable:   10510652 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497972 kB' 'Inactive:        1345764 kB' 'Active(anon):     128800 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        119884 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162892 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95112 kB' 'KernelStack:        6448 kB' 'PageTables:         4340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13983872 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55176 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.241    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.241    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.242    06:14:27	-- setup/common.sh@33 -- # echo 0
00:04:10.242    06:14:27	-- setup/common.sh@33 -- # return 0
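[editor's note] The trace above is get_meminfo from setup/common.sh scanning every /proc/meminfo line until it reaches the requested key (HugePages_Rsvd here) and echoing its value, 0. A minimal sketch of that lookup, assuming the same "Key: value" layout; the real helper also handles per-node meminfo files, which this sketch omits:

    # Hedged sketch: look up a single /proc/meminfo field the way the trace walks the list.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"          # 0 for HugePages_Rsvd in this run
            return 0
        done < /proc/meminfo
        return 1
    }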
00:04:10.242   06:14:27	-- setup/hugepages.sh@100 -- # resv=0
00:04:10.242   06:14:27	-- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:10.242  nr_hugepages=512
00:04:10.242   06:14:27	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:10.242  resv_hugepages=0
00:04:10.242   06:14:27	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:10.242  surplus_hugepages=0
00:04:10.242   06:14:27	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:10.242  anon_hugepages=0
00:04:10.242   06:14:27	-- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:10.242   06:14:27	-- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
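[editor's note] The checks at hugepages.sh@107 and @109 verify that the 512 pages configured for the custom_alloc test are fully accounted for: with zero reserved and zero surplus pages, HugePages_Total must equal nr_hugepages + surp + resv. A minimal sketch of that consistency check, using the values printed in this run:

    # Hedged sketch of the consistency check at hugepages.sh@107/@109 (values from this run).
    nr_hugepages=512 resv=0 surp=0
    if (( 512 == nr_hugepages + surp + resv )) && (( 512 == nr_hugepages )); then
        echo "hugepage accounting consistent"
    fi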
00:04:10.242    06:14:27	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.242    06:14:27	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.242    06:14:27	-- setup/common.sh@18 -- # local node=
00:04:10.242    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:10.242    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:10.242    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.242    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.242    06:14:27	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.242    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:10.242    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         8998976 kB' 'MemAvailable:   10510652 kB' 'Buffers:            2684 kB' 'Cached:          1722736 kB' 'SwapCached:            0 kB' 'Active:           497964 kB' 'Inactive:        1345764 kB' 'Active(anon):     128792 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        119880 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162888 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95108 kB' 'KernelStack:        6448 kB' 'PageTables:         4340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13983872 kB' 'Committed_AS:     312644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55192 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.242    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.242    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.502    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.503    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.503    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.503    06:14:27	-- setup/common.sh@33 -- # echo 512
00:04:10.503    06:14:27	-- setup/common.sh@33 -- # return 0
00:04:10.503   06:14:27	-- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
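[editor's note] The second get_meminfo pass returns HugePages_Total (512), and @110 confirms it still matches nr_hugepages + surp + resv. A quick cross-check against the meminfo snapshot above, which reports Hugepagesize 2048 kB and Hugetlb 1048576 kB:

    # Worked arithmetic: 512 huge pages of 2048 kB each.
    echo "$((512 * 2048)) kB"   # 1048576 kB, matching the Hugetlb: line in the snapshot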
00:04:10.503   06:14:27	-- setup/hugepages.sh@112 -- # get_nodes
00:04:10.503   06:14:27	-- setup/hugepages.sh@27 -- # local node
00:04:10.503   06:14:27	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.503   06:14:27	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:10.503   06:14:27	-- setup/hugepages.sh@32 -- # no_nodes=1
00:04:10.503   06:14:27	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
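[editor's note] get_nodes (hugepages.sh@27-33) enumerates /sys/devices/system/node/node* and records the per-node huge page count; this single-node guest yields no_nodes=1 with nodes_sys[0]=512. A rough sketch of that enumeration, where the per-node nr_hugepages sysfs file is an assumption (the trace only shows the resulting value, not its source):

    # Hedged sketch of node enumeration as the get_nodes trace suggests.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 1 on this guest, with nodes_sys[0]=512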
00:04:10.504   06:14:27	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.504   06:14:27	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.504    06:14:27	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:10.504    06:14:27	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.504    06:14:27	-- setup/common.sh@18 -- # local node=0
00:04:10.504    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:10.504    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:10.504    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.504    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:10.504    06:14:27	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:10.504    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:10.504    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         9001188 kB' 'MemUsed:         3237924 kB' 'SwapCached:            0 kB' 'Active:           498064 kB' 'Inactive:        1345764 kB' 'Active(anon):     128892 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345764 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'FilePages:       1725420 kB' 'Mapped:            50764 kB' 'AnonPages:        120004 kB' 'Shmem:             10484 kB' 'KernelStack:        6464 kB' 'PageTables:         4392 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      67780 kB' 'Slab:             162888 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95108 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.504    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.504    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.505    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.505    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.505    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.505    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.505    06:14:27	-- setup/common.sh@33 -- # echo 0
00:04:10.505    06:14:27	-- setup/common.sh@33 -- # return 0
00:04:10.505   06:14:27	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
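[editor's note] This get_meminfo call ran with node=0, so common.sh@23-24 switched the source file to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the expansion at @29 strips before parsing. A minimal sketch of that node-specific branch, assuming the same prefix format:

    # Hedged sketch of the per-node branch visible at common.sh@23-29 (node=0 in this call).
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix from each per-node line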
00:04:10.505   06:14:27	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.505   06:14:27	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.505   06:14:27	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.505   06:14:27	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:10.505  node0=512 expecting 512
00:04:10.505   06:14:27	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
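[editor's note] The closing steps at hugepages.sh@126-130 reduce the expected and observed per-node counts to key sets and compare them; both collapse to "512" here, so the final [[ 512 == 512 ]] test passes and custom_alloc succeeds. A rough sketch of that comparison, assuming nodes_test and nodes_sys are populated as in the trace above:

    # Hedged sketch of the final comparison implied by hugepages.sh@126-130.
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1     # per-node counts the test expects
        sorted_s[nodes_sys[node]]=1      # per-node counts read back from the system
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]   # both reduce to "512" in this run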
00:04:10.505  
00:04:10.505  real	0m0.561s
00:04:10.505  user	0m0.294s
00:04:10.505  sys	0m0.271s
00:04:10.505   06:14:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:10.505  ************************************
00:04:10.505  END TEST custom_alloc
00:04:10.505  ************************************
00:04:10.505   06:14:27	-- common/autotest_common.sh@10 -- # set +x
00:04:10.505   06:14:27	-- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:10.505   06:14:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:10.505   06:14:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:10.505   06:14:27	-- common/autotest_common.sh@10 -- # set +x
00:04:10.505  ************************************
00:04:10.505  START TEST no_shrink_alloc
00:04:10.505  ************************************
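[editor's note] run_test in common/autotest_common.sh wraps each test function in the START/END banners and the real/user/sys timing seen around these blocks; the '[' 2 -le 1 ']' check above is its argument-count guard. A rough sketch of that wrapper pattern, where the function body is inferred from the banners and timing lines in this log rather than taken from the source:

    # Hedged sketch of a run_test-style wrapper inferred from the banners in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # e.g. no_shrink_alloc; bash prints real/user/sys afterwards
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }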
00:04:10.505   06:14:27	-- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:10.505   06:14:27	-- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:10.505   06:14:27	-- setup/hugepages.sh@49 -- # local size=2097152
00:04:10.505   06:14:27	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:10.505   06:14:27	-- setup/hugepages.sh@51 -- # shift
00:04:10.505   06:14:27	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:10.505   06:14:27	-- setup/hugepages.sh@52 -- # local node_ids
00:04:10.505   06:14:27	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:10.505   06:14:27	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:10.505   06:14:27	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:10.505   06:14:27	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:10.505   06:14:27	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:10.505   06:14:27	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:10.505   06:14:27	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:10.505   06:14:27	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:10.505   06:14:27	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:10.505   06:14:27	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:10.505   06:14:27	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:10.505   06:14:27	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:10.505   06:14:27	-- setup/hugepages.sh@73 -- # return 0
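[editor's note] The no_shrink_alloc prologue (hugepages.sh@49-73) requests 2097152 kB of huge page memory restricted to node 0; divided by the default 2048 kB page size that is 1024 pages, which is exactly what @57 and @71 record. A worked check of that arithmetic, assuming both quantities are in kB as the later Hugetlb value suggests:

    # Worked arithmetic for the request traced at hugepages.sh@49-71 (kB units assumed).
    size=2097152            # requested huge page memory for node 0
    default_hugepages=2048  # Hugepagesize reported in /proc/meminfo
    echo $(( size / default_hugepages ))   # 1024, the nr_hugepages assigned to node 0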
00:04:10.505   06:14:27	-- setup/hugepages.sh@198 -- # setup output
00:04:10.505   06:14:27	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.505   06:14:27	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:10.764  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:10.764  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.764  0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
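[editor's note] setup.sh leaves 0000:00:03.0 (the virtio disk backing the mounted vda partitions) on its kernel driver and reports that the two PCI test devices at 00:06.0 and 00:07.0 are already bound to uio_pci_generic. One way to inspect such bindings by hand; this is a sketch, not the setup.sh implementation:

    # Hedged sketch: show which kernel driver each PCI function is currently bound to.
    for dev in 0000:00:03.0 0000:00:06.0 0000:00:07.0; do
        drv=$(readlink -f "/sys/bus/pci/devices/$dev/driver" 2>/dev/null)
        echo "$dev -> ${drv##*/}"   # e.g. virtio-pci or uio_pci_generic
    done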
00:04:10.764   06:14:27	-- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:10.764   06:14:27	-- setup/hugepages.sh@89 -- # local node
00:04:10.764   06:14:27	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.764   06:14:27	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.764   06:14:27	-- setup/hugepages.sh@92 -- # local surp
00:04:10.764   06:14:27	-- setup/hugepages.sh@93 -- # local resv
00:04:10.764   06:14:27	-- setup/hugepages.sh@94 -- # local anon
00:04:10.764   06:14:27	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
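[editor's note] verify_nr_hugepages first checks that transparent huge pages are not disabled: the bracketed token in "always [madvise] never" marks the kernel's active THP mode (madvise here), and the test only tracks anon_hugepages when the mode is not [never]. A minimal sketch of the same check against the standard sysfs path:

    # Hedged sketch of the THP-mode check at hugepages.sh@96.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    [[ $thp != *"[never]"* ]] && echo "THP enabled; anon_hugepages will be tracked"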
00:04:10.764    06:14:27	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.764    06:14:27	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.764    06:14:27	-- setup/common.sh@18 -- # local node=
00:04:10.764    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:10.764    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:10.764    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.764    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.764    06:14:27	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.764    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:10.764    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.764    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.764    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7954544 kB' 'MemAvailable:    9466224 kB' 'Buffers:            2684 kB' 'Cached:          1722740 kB' 'SwapCached:            0 kB' 'Active:           498640 kB' 'Inactive:        1345768 kB' 'Active(anon):     129468 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        120640 kB' 'Mapped:            50944 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162876 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95096 kB' 'KernelStack:        6472 kB' 'PageTables:         4328 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312844 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55160 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.765    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.765    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.766    06:14:27	-- setup/common.sh@33 -- # echo 0
00:04:10.766    06:14:27	-- setup/common.sh@33 -- # return 0
00:04:10.766   06:14:27	-- setup/hugepages.sh@97 -- # anon=0
00:04:10.766    06:14:27	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.766    06:14:27	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.766    06:14:27	-- setup/common.sh@18 -- # local node=
00:04:10.766    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:10.766    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:10.766    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.766    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.766    06:14:27	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.766    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:10.766    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7954804 kB' 'MemAvailable:    9466484 kB' 'Buffers:            2684 kB' 'Cached:          1722740 kB' 'SwapCached:            0 kB' 'Active:           498520 kB' 'Inactive:        1345768 kB' 'Active(anon):     129348 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        120312 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162880 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95100 kB' 'KernelStack:        6512 kB' 'PageTables:         4528 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312844 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55160 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.766    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.766    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.767    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.767    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.767    06:14:27	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.767    06:14:27	-- setup/common.sh@32 -- # continue
00:04:10.767    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:10.767    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:10.767    06:14:27	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.767    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.028    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.028    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.029    06:14:27	-- setup/common.sh@33 -- # echo 0
00:04:11.029    06:14:27	-- setup/common.sh@33 -- # return 0
00:04:11.029   06:14:27	-- setup/hugepages.sh@99 -- # surp=0
00:04:11.029    06:14:27	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.029    06:14:27	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.029    06:14:27	-- setup/common.sh@18 -- # local node=
00:04:11.029    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:11.029    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:11.029    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.029    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.029    06:14:27	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.029    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:11.029    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7954856 kB' 'MemAvailable:    9466536 kB' 'Buffers:            2684 kB' 'Cached:          1722740 kB' 'SwapCached:            0 kB' 'Active:           497956 kB' 'Inactive:        1345768 kB' 'Active(anon):     128784 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        119908 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162872 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95092 kB' 'KernelStack:        6448 kB' 'PageTables:         4348 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312844 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55160 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.029    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.029    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.030    06:14:27	-- setup/common.sh@33 -- # echo 0
00:04:11.030    06:14:27	-- setup/common.sh@33 -- # return 0
00:04:11.030  nr_hugepages=1024
00:04:11.030   06:14:27	-- setup/hugepages.sh@100 -- # resv=0
00:04:11.030   06:14:27	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.030  resv_hugepages=0
00:04:11.030   06:14:27	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.030  surplus_hugepages=0
00:04:11.030  anon_hugepages=0
00:04:11.030   06:14:27	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.030   06:14:27	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:11.030   06:14:27	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.030   06:14:27	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:11.030    06:14:27	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:11.030    06:14:27	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.030    06:14:27	-- setup/common.sh@18 -- # local node=
00:04:11.030    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:11.030    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:11.030    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.030    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.030    06:14:27	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.030    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:11.030    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7955256 kB' 'MemAvailable:    9466936 kB' 'Buffers:            2684 kB' 'Cached:          1722740 kB' 'SwapCached:            0 kB' 'Active:           497968 kB' 'Inactive:        1345768 kB' 'Active(anon):     128796 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        119924 kB' 'Mapped:            50764 kB' 'Shmem:             10484 kB' 'KReclaimable:      67780 kB' 'Slab:             162868 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95088 kB' 'KernelStack:        6448 kB' 'PageTables:         4348 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     312844 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55160 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.030    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.030    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.031    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.031    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.032    06:14:27	-- setup/common.sh@33 -- # echo 1024
00:04:11.032    06:14:27	-- setup/common.sh@33 -- # return 0
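[editor note] The block ending here is the shell trace of setup/common.sh's get_meminfo helper resolving HugePages_Total (1024) from /proc/meminfo. Reconstructed from the traced commands only, the helper looks roughly like the sketch below; this is an approximation, not a verbatim copy of common.sh, and the explicit `shopt -s extglob` is an assumption needed for the `+([0-9])` pattern to work.

  # Approximate reconstruction of get_meminfo from the trace above (hedged).
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # Prefer the per-node meminfo when a node index is given and it exists.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")

      # Walk "Key: value [kB]" pairs until the requested key matches.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called here as get_meminfo HugePages_Total with no node argument, so the system-wide /proc/meminfo is parsed and 1024 is echoed back to hugepages.sh@110.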
00:04:11.032   06:14:27	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.032   06:14:27	-- setup/hugepages.sh@112 -- # get_nodes
00:04:11.032   06:14:27	-- setup/hugepages.sh@27 -- # local node
00:04:11.032   06:14:27	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.032   06:14:27	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:11.032   06:14:27	-- setup/hugepages.sh@32 -- # no_nodes=1
00:04:11.032   06:14:27	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
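[editor note] hugepages.sh@112 then calls get_nodes, which the trace shows enumerating /sys/devices/system/node/node* and recording 1024 pages for the single node. A minimal sketch of that enumeration follows, reusing the get_meminfo sketch above; how each per-node count is actually obtained is an assumption (shown here as per-node HugePages_Total), since the trace only shows the resulting assignment.

  # Hedged sketch of get_nodes as implied by the trace (single NUMA node here).
  shopt -s extglob
  nodes_sys=()
  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # node0 -> index 0; record that node's hugepage count (source assumed).
          nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))   # fail if no NUMA nodes were found
  }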
00:04:11.032   06:14:27	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.032   06:14:27	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:11.032    06:14:27	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:11.032    06:14:27	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.032    06:14:27	-- setup/common.sh@18 -- # local node=0
00:04:11.032    06:14:27	-- setup/common.sh@19 -- # local var val
00:04:11.032    06:14:27	-- setup/common.sh@20 -- # local mem_f mem
00:04:11.032    06:14:27	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.032    06:14:27	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:11.032    06:14:27	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:11.032    06:14:27	-- setup/common.sh@28 -- # mapfile -t mem
00:04:11.032    06:14:27	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.032     06:14:27	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7955256 kB' 'MemUsed:         4283856 kB' 'SwapCached:            0 kB' 'Active:           498024 kB' 'Inactive:        1345768 kB' 'Active(anon):     128852 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'FilePages:       1725424 kB' 'Mapped:            50764 kB' 'AnonPages:        120024 kB' 'Shmem:             10484 kB' 'KernelStack:        6464 kB' 'PageTables:         4400 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      67780 kB' 'Slab:             162868 kB' 'SReclaimable:      67780 kB' 'SUnreclaim:        95088 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.032    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.032    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # continue
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # IFS=': '
00:04:11.033    06:14:27	-- setup/common.sh@31 -- # read -r var val _
00:04:11.033    06:14:27	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.033    06:14:27	-- setup/common.sh@33 -- # echo 0
00:04:11.033    06:14:27	-- setup/common.sh@33 -- # return 0
00:04:11.033   06:14:27	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.033   06:14:27	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.033   06:14:27	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.033   06:14:27	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.033   06:14:27	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:11.033  node0=1024 expecting 1024
00:04:11.033   06:14:27	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
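[editor note] Lines @115-@130 above assemble the expected per-node count: each nodes_test[] entry gets the global reserved count plus that node's surplus (both 0 in this run), the result is echoed, and it is compared against the 1024 pages found on node0. In effect the check reduces to the condensation below; array and variable names are taken from the trace, the initial values are the ones the log reports, and which array supplies which number in the echo is an assumption.

  # Illustrative condensation of the per-node verification traced above.
  nodes_test=(1024)                        # expected pages for node0 (set earlier in hugepages.sh)
  nodes_sys=(1024)                         # pages actually found by get_nodes
  resv=0                                   # global HugePages_Rsvd
  surp0=$(get_meminfo HugePages_Surp 0)    # node0 surplus, 0 here
  (( nodes_test[0] += resv + surp0 ))
  echo "node0=${nodes_test[0]} expecting ${nodes_sys[0]}"
  [[ ${nodes_test[0]} == "${nodes_sys[0]}" ]]   # 1024 == 1024 -> pass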
00:04:11.033   06:14:27	-- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:11.033   06:14:27	-- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:11.033   06:14:27	-- setup/hugepages.sh@202 -- # setup output
00:04:11.033   06:14:27	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.033   06:14:27	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:11.291  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:11.291  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.291  0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.291  INFO: Requested 512 hugepages but 1024 already allocated on node0
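[editor note] At @202 the test flips to CLEAR_HUGE=no and NRHUGE=512 and re-runs scripts/setup.sh; the driver lines and the INFO message above show that, because the existing reservation is not cleared, the 1024 pages already allocated on node0 stay in place and the smaller request is effectively a no-op. Re-creating that invocation by hand would look roughly like the line below; the path and variable names come from the log, treating them as environment overrides is an assumption.

  # Hedged re-creation of the traced invocation.
  CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # -> "INFO: Requested 512 hugepages but 1024 already allocated on node0"

The verify_nr_hugepages pass that follows therefore still expects 1024 pages, as the later meminfo dumps (HugePages_Total: 1024) confirm.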
00:04:11.291   06:14:28	-- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:11.291   06:14:28	-- setup/hugepages.sh@89 -- # local node
00:04:11.291   06:14:28	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:11.291   06:14:28	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.291   06:14:28	-- setup/hugepages.sh@92 -- # local surp
00:04:11.291   06:14:28	-- setup/hugepages.sh@93 -- # local resv
00:04:11.291   06:14:28	-- setup/hugepages.sh@94 -- # local anon
00:04:11.291   06:14:28	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.291    06:14:28	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:11.291    06:14:28	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:11.291    06:14:28	-- setup/common.sh@18 -- # local node=
00:04:11.291    06:14:28	-- setup/common.sh@19 -- # local var val
00:04:11.291    06:14:28	-- setup/common.sh@20 -- # local mem_f mem
00:04:11.291    06:14:28	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.291    06:14:28	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.291    06:14:28	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.291    06:14:28	-- setup/common.sh@28 -- # mapfile -t mem
00:04:11.291    06:14:28	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291     06:14:28	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7954184 kB' 'MemAvailable:    9465852 kB' 'Buffers:            2684 kB' 'Cached:          1722740 kB' 'SwapCached:            0 kB' 'Active:           495836 kB' 'Inactive:        1345768 kB' 'Active(anon):     126664 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        117764 kB' 'Mapped:            50048 kB' 'Shmem:             10484 kB' 'KReclaimable:      67756 kB' 'Slab:             162624 kB' 'SReclaimable:      67756 kB' 'SUnreclaim:        94868 kB' 'KernelStack:        6376 kB' 'PageTables:         4092 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     294556 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55128 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.291    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.291    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.551    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.551    06:14:28	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.552    06:14:28	-- setup/common.sh@33 -- # echo 0
00:04:11.552    06:14:28	-- setup/common.sh@33 -- # return 0
00:04:11.552   06:14:28	-- setup/hugepages.sh@97 -- # anon=0
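[editor note] verify_nr_hugepages first checks (at @96) that transparent hugepages are not forced to [never] and, since they are not, reads AnonHugePages from /proc/meminfo; the parse above ends with anon=0. A compact sketch of that probe is below; reading the sysfs knob directly is an assumption, though the traced comparison string "always [madvise] never" matches its usual format.

  # Sketch of the anon-THP probe implied by hugepages.sh@96-@97.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # kB of anonymous THP in use (0 kB here)
  else
      anon=0
  fi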
00:04:11.552    06:14:28	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.552    06:14:28	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.552    06:14:28	-- setup/common.sh@18 -- # local node=
00:04:11.552    06:14:28	-- setup/common.sh@19 -- # local var val
00:04:11.552    06:14:28	-- setup/common.sh@20 -- # local mem_f mem
00:04:11.552    06:14:28	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.552    06:14:28	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.552    06:14:28	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.552    06:14:28	-- setup/common.sh@28 -- # mapfile -t mem
00:04:11.552    06:14:28	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552     06:14:28	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7954440 kB' 'MemAvailable:    9466108 kB' 'Buffers:            2684 kB' 'Cached:          1722740 kB' 'SwapCached:            0 kB' 'Active:           495076 kB' 'Inactive:        1345768 kB' 'Active(anon):     125904 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        116988 kB' 'Mapped:            50056 kB' 'Shmem:             10484 kB' 'KReclaimable:      67756 kB' 'Slab:             162616 kB' 'SReclaimable:      67756 kB' 'SUnreclaim:        94860 kB' 'KernelStack:        6312 kB' 'PageTables:         3876 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     294556 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55096 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.552    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.552    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.553    06:14:28	-- setup/common.sh@33 -- # echo 0
00:04:11.553    06:14:28	-- setup/common.sh@33 -- # return 0
00:04:11.553   06:14:28	-- setup/hugepages.sh@99 -- # surp=0
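[editor note] With anon=0 recorded, the verifier reads the system-wide surplus (surp=0 above) and, in the block that follows, HugePages_Rsvd; together these feed the same accounting identity already seen at hugepages.sh@110. Roughly, using the names from the trace (the final comparison is inferred from that earlier line, and nr_hugepages is the target the test set beforehand):

  # The remaining probes and the accounting identity they feed (hedged sketch).
  nr_hugepages=1024                      # target configured earlier in the test
  surp=$(get_meminfo HugePages_Surp)     # pages allocated beyond the static pool (0)
  resv=$(get_meminfo HugePages_Rsvd)     # pages reserved but not yet faulted in
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))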
00:04:11.553    06:14:28	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.553    06:14:28	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.553    06:14:28	-- setup/common.sh@18 -- # local node=
00:04:11.553    06:14:28	-- setup/common.sh@19 -- # local var val
00:04:11.553    06:14:28	-- setup/common.sh@20 -- # local mem_f mem
00:04:11.553    06:14:28	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.553    06:14:28	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.553    06:14:28	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.553    06:14:28	-- setup/common.sh@28 -- # mapfile -t mem
00:04:11.553    06:14:28	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553     06:14:28	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7954692 kB' 'MemAvailable:    9466360 kB' 'Buffers:            2684 kB' 'Cached:          1722740 kB' 'SwapCached:            0 kB' 'Active:           495012 kB' 'Inactive:        1345768 kB' 'Active(anon):     125840 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        116928 kB' 'Mapped:            49944 kB' 'Shmem:             10484 kB' 'KReclaimable:      67756 kB' 'Slab:             162616 kB' 'SReclaimable:      67756 kB' 'SUnreclaim:        94860 kB' 'KernelStack:        6336 kB' 'PageTables:         3868 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     294556 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55096 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.553    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.553    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.554    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.554    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.554    06:14:28	-- setup/common.sh@33 -- # echo 0
00:04:11.554    06:14:28	-- setup/common.sh@33 -- # return 0
00:04:11.554   06:14:28	-- setup/hugepages.sh@100 -- # resv=0
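The long run of [[ ... ]] / continue lines above is setup/common.sh's get_meminfo scanning /proc/meminfo one "key: value" line at a time with IFS=': ' and skipping every key until it reaches the one requested (HugePages_Rsvd, which is 0 on this runner, hence resv=0). A minimal standalone sketch of that lookup pattern, using a hypothetical helper name meminfo_value rather than the actual SPDK function:

  # Sketch only: look up one /proc/meminfo key the way the trace above does.
  meminfo_value() {
      local want=$1 file=${2:-/proc/meminfo} key val _
      while IFS=': ' read -r key val _; do
          [[ $key == "$want" ]] || continue    # skip non-matching keys
          echo "$val"                          # kB for sizes, a bare count for HugePages_*
          return 0
      done < "$file"
      return 1                                 # key not present
  }

  meminfo_value HugePages_Rsvd   # prints 0 on this runner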
00:04:11.554  nr_hugepages=1024
00:04:11.554  resv_hugepages=0
00:04:11.554  surplus_hugepages=0
00:04:11.554  anon_hugepages=0
00:04:11.554   06:14:28	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.554   06:14:28	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.554   06:14:28	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.554   06:14:28	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:11.554   06:14:28	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.554   06:14:28	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
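Having collected nr_hugepages=1024 and resv=surplus=anon=0, the two (( ... )) checks above assert that the counters read back from meminfo add up to the configured pool (count == nr_hugepages + surp + resv). A hedged sketch of that consistency check, reusing the meminfo_value helper assumed in the previous sketch; which counter sits on the left of the comparison is inferred, the pattern is the same either way:

  # Sketch only: the accounting check behind the (( ... )) lines above.
  nr_hugepages=$(< /proc/sys/vm/nr_hugepages)
  free=$(meminfo_value HugePages_Free)
  resv=$(meminfo_value HugePages_Rsvd)
  surp=$(meminfo_value HugePages_Surp)
  if (( free == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: $free free of $nr_hugepages"
  else
      echo "mismatch: free=$free nr=$nr_hugepages surp=$surp resv=$resv" >&2
  fi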
00:04:11.554    06:14:28	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:11.554    06:14:28	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.554    06:14:28	-- setup/common.sh@18 -- # local node=
00:04:11.555    06:14:28	-- setup/common.sh@19 -- # local var val
00:04:11.555    06:14:28	-- setup/common.sh@20 -- # local mem_f mem
00:04:11.555    06:14:28	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.555    06:14:28	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.555    06:14:28	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.555    06:14:28	-- setup/common.sh@28 -- # mapfile -t mem
00:04:11.555    06:14:28	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555     06:14:28	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7954692 kB' 'MemAvailable:    9466360 kB' 'Buffers:            2684 kB' 'Cached:          1722740 kB' 'SwapCached:            0 kB' 'Active:           495244 kB' 'Inactive:        1345768 kB' 'Active(anon):     126072 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'AnonPages:        117160 kB' 'Mapped:            49944 kB' 'Shmem:             10484 kB' 'KReclaimable:      67756 kB' 'Slab:             162616 kB' 'SReclaimable:      67756 kB' 'SUnreclaim:        94860 kB' 'KernelStack:        6336 kB' 'PageTables:         3868 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    13459584 kB' 'Committed_AS:     294556 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       55096 kB' 'VmallocChunk:          0 kB' 'Percpu:             6384 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      198508 kB' 'DirectMap2M:     5044224 kB' 'DirectMap1G:     9437184 kB'
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.555    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.555    06:14:28	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.556    06:14:28	-- setup/common.sh@33 -- # echo 1024
00:04:11.556    06:14:28	-- setup/common.sh@33 -- # return 0
00:04:11.556   06:14:28	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.556   06:14:28	-- setup/hugepages.sh@112 -- # get_nodes
00:04:11.556   06:14:28	-- setup/hugepages.sh@27 -- # local node
00:04:11.556   06:14:28	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.556   06:14:28	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:11.556   06:14:28	-- setup/hugepages.sh@32 -- # no_nodes=1
00:04:11.556   06:14:28	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
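get_nodes then discovers the NUMA topology by globbing /sys/devices/system/node/node<N> (a single node 0 on this VM, hence no_nodes=1) and records each node's configured hugepage count. A hedged sketch of the same discovery; reading the count from the hugepages-2048kB sysfs file is an assumption that matches the "Hugepagesize: 2048 kB" reported above:

  # Sketch only: enumerate NUMA nodes and read each node's 2 MiB hugepage count.
  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}
      nodes_sys[$id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "found ${#nodes_sys[@]} node(s): ${!nodes_sys[*]}"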
00:04:11.556   06:14:28	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.556   06:14:28	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:11.556    06:14:28	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:11.556    06:14:28	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.556    06:14:28	-- setup/common.sh@18 -- # local node=0
00:04:11.556    06:14:28	-- setup/common.sh@19 -- # local var val
00:04:11.556    06:14:28	-- setup/common.sh@20 -- # local mem_f mem
00:04:11.556    06:14:28	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.556    06:14:28	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:11.556    06:14:28	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:11.556    06:14:28	-- setup/common.sh@28 -- # mapfile -t mem
00:04:11.556    06:14:28	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556     06:14:28	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12239112 kB' 'MemFree:         7954692 kB' 'MemUsed:         4284420 kB' 'SwapCached:            0 kB' 'Active:           495128 kB' 'Inactive:        1345768 kB' 'Active(anon):     125956 kB' 'Inactive(anon):        0 kB' 'Active(file):     369172 kB' 'Inactive(file):  1345768 kB' 'Unevictable:        1536 kB' 'Mlocked:               0 kB' 'Dirty:               132 kB' 'Writeback:             0 kB' 'FilePages:       1725424 kB' 'Mapped:            49944 kB' 'AnonPages:        117040 kB' 'Shmem:             10484 kB' 'KernelStack:        6352 kB' 'PageTables:         3920 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      67756 kB' 'Slab:             162616 kB' 'SReclaimable:      67756 kB' 'SUnreclaim:        94860 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.556    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.556    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # continue
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # IFS=': '
00:04:11.557    06:14:28	-- setup/common.sh@31 -- # read -r var val _
00:04:11.557    06:14:28	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.557    06:14:28	-- setup/common.sh@33 -- # echo 0
00:04:11.557    06:14:28	-- setup/common.sh@33 -- # return 0
00:04:11.557   06:14:28	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
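For the per-node pass, get_meminfo is called with a node argument (HugePages_Surp on node 0 above), and the trace shows it swapping /proc/meminfo for /sys/devices/system/node/node0/meminfo and stripping the leading "Node 0 " prefix before running the same key scan. A hedged per-node variant of the earlier lookup sketch (helper name assumed):

  # Sketch only: per-node meminfo lookup, falling back to /proc/meminfo.
  node_meminfo_value() {
      local want=$1 node=${2-} file=/proc/meminfo line key val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          file=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the prefix.
          [[ $line =~ ^Node\ [0-9]+\ +(.*)$ ]] && line=${BASH_REMATCH[1]}
          IFS=': ' read -r key val _ <<< "$line"
          [[ $key == "$want" ]] && { echo "$val"; return 0; }
      done < "$file"
      return 1
  }

  node_meminfo_value HugePages_Surp 0   # prints 0 for node 0 on this runner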
00:04:11.557  node0=1024 expecting 1024
00:04:11.557  ************************************
00:04:11.557  END TEST no_shrink_alloc
00:04:11.557  ************************************
00:04:11.557   06:14:28	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.557   06:14:28	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.557   06:14:28	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.557   06:14:28	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:11.557   06:14:28	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
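The remaining hugepages.sh lines compare what the test expects each node to hold (nodes_test) with what sysfs actually reports (nodes_sys); "node0=1024 expecting 1024" means they agree, so the final [[ 1024 == 1024 ]] check passes. A hedged, single-pass version of that comparison with the example values from this run filled in:

  # Sketch only: expected vs. observed hugepages per node (values from this run).
  declare -A nodes_test=([0]=1024) nodes_sys=([0]=1024)
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
      [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || echo "node$node mismatch" >&2
  done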
00:04:11.557  
00:04:11.557  real	0m1.112s
00:04:11.557  user	0m0.546s
00:04:11.557  sys	0m0.568s
00:04:11.557   06:14:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:11.557   06:14:28	-- common/autotest_common.sh@10 -- # set +x
00:04:11.557   06:14:28	-- setup/hugepages.sh@217 -- # clear_hp
00:04:11.557   06:14:28	-- setup/hugepages.sh@37 -- # local node hp
00:04:11.557   06:14:28	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:11.557   06:14:28	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:11.557   06:14:28	-- setup/hugepages.sh@41 -- # echo 0
00:04:11.557   06:14:28	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:11.557   06:14:28	-- setup/hugepages.sh@41 -- # echo 0
00:04:11.557   06:14:28	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:11.557   06:14:28	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
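clear_hp then releases the pools it set up: one "echo 0" per hugepages-* directory under each node, followed by exporting CLEAR_HUGE=yes so later setup.sh invocations know the pools were cleared. A hedged sketch of that teardown; writing to nr_hugepages is inferred from the sysfs layout and needs root:

  # Sketch only: drop all hugepages on every node (run as root).
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
  done
  export CLEAR_HUGE=yes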
00:04:11.557  
00:04:11.557  real	0m4.880s
00:04:11.557  user	0m2.327s
00:04:11.557  sys	0m2.488s
00:04:11.557   06:14:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:11.557   06:14:28	-- common/autotest_common.sh@10 -- # set +x
00:04:11.557  ************************************
00:04:11.557  END TEST hugepages
00:04:11.557  ************************************
00:04:11.816   06:14:28	-- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:11.816   06:14:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:11.816   06:14:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:11.816   06:14:28	-- common/autotest_common.sh@10 -- # set +x
00:04:11.816  ************************************
00:04:11.816  START TEST driver
00:04:11.816  ************************************
00:04:11.816   06:14:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:11.816  * Looking for test storage...
00:04:11.816  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:11.816     06:14:28	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:11.816      06:14:28	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:11.816      06:14:28	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:11.816     06:14:28	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:11.816     06:14:28	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:11.816     06:14:28	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:11.816     06:14:28	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:11.816     06:14:28	-- scripts/common.sh@335 -- # IFS=.-:
00:04:11.816     06:14:28	-- scripts/common.sh@335 -- # read -ra ver1
00:04:11.816     06:14:28	-- scripts/common.sh@336 -- # IFS=.-:
00:04:11.816     06:14:28	-- scripts/common.sh@336 -- # read -ra ver2
00:04:11.816     06:14:28	-- scripts/common.sh@337 -- # local 'op=<'
00:04:11.816     06:14:28	-- scripts/common.sh@339 -- # ver1_l=2
00:04:11.816     06:14:28	-- scripts/common.sh@340 -- # ver2_l=1
00:04:11.816     06:14:28	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:11.816     06:14:28	-- scripts/common.sh@343 -- # case "$op" in
00:04:11.816     06:14:28	-- scripts/common.sh@344 -- # : 1
00:04:11.816     06:14:28	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:11.816     06:14:28	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:11.816      06:14:28	-- scripts/common.sh@364 -- # decimal 1
00:04:11.816      06:14:28	-- scripts/common.sh@352 -- # local d=1
00:04:11.816      06:14:28	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:11.816      06:14:28	-- scripts/common.sh@354 -- # echo 1
00:04:11.816     06:14:28	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:11.816      06:14:28	-- scripts/common.sh@365 -- # decimal 2
00:04:11.816      06:14:28	-- scripts/common.sh@352 -- # local d=2
00:04:11.816      06:14:28	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:11.816      06:14:28	-- scripts/common.sh@354 -- # echo 2
00:04:11.816     06:14:28	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:11.816     06:14:28	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:11.816     06:14:28	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:11.816     06:14:28	-- scripts/common.sh@367 -- # return 0
00:04:11.816     06:14:28	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:11.816     06:14:28	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:11.816  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.816  		--rc genhtml_branch_coverage=1
00:04:11.816  		--rc genhtml_function_coverage=1
00:04:11.816  		--rc genhtml_legend=1
00:04:11.816  		--rc geninfo_all_blocks=1
00:04:11.816  		--rc geninfo_unexecuted_blocks=1
00:04:11.816  		
00:04:11.816  		'
00:04:11.816     06:14:28	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:11.816  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.816  		--rc genhtml_branch_coverage=1
00:04:11.816  		--rc genhtml_function_coverage=1
00:04:11.816  		--rc genhtml_legend=1
00:04:11.816  		--rc geninfo_all_blocks=1
00:04:11.816  		--rc geninfo_unexecuted_blocks=1
00:04:11.816  		
00:04:11.816  		'
00:04:11.816     06:14:28	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:11.816  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.816  		--rc genhtml_branch_coverage=1
00:04:11.816  		--rc genhtml_function_coverage=1
00:04:11.816  		--rc genhtml_legend=1
00:04:11.816  		--rc geninfo_all_blocks=1
00:04:11.816  		--rc geninfo_unexecuted_blocks=1
00:04:11.816  		
00:04:11.816  		'
00:04:11.816     06:14:28	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:11.816  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.816  		--rc genhtml_branch_coverage=1
00:04:11.816  		--rc genhtml_function_coverage=1
00:04:11.816  		--rc genhtml_legend=1
00:04:11.816  		--rc geninfo_all_blocks=1
00:04:11.816  		--rc geninfo_unexecuted_blocks=1
00:04:11.816  		
00:04:11.816  		'
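The block above is autotest_common.sh picking lcov coverage flags: it reads the installed lcov version, and scripts/common.sh compares it to 1.15 by splitting both version strings on '.', '-' and ':' and walking the numeric fields left to right ("lt 1.15 2" succeeds here, so the branch/function coverage options get exported). A hedged standalone sketch of that field-wise comparison; the function name ver_lt is an assumption:

  # Sketch only: true when version $1 is strictly older than version $2.
  ver_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  ver_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 result above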
00:04:11.816   06:14:28	-- setup/driver.sh@68 -- # setup reset
00:04:11.816   06:14:28	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:11.816   06:14:28	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:12.384   06:14:29	-- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:12.384   06:14:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:12.384   06:14:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:12.384   06:14:29	-- common/autotest_common.sh@10 -- # set +x
00:04:12.384  ************************************
00:04:12.384  START TEST guess_driver
00:04:12.384  ************************************
00:04:12.384   06:14:29	-- common/autotest_common.sh@1114 -- # guess_driver
00:04:12.384   06:14:29	-- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:12.384   06:14:29	-- setup/driver.sh@47 -- # local fail=0
00:04:12.384    06:14:29	-- setup/driver.sh@49 -- # pick_driver
00:04:12.384    06:14:29	-- setup/driver.sh@36 -- # vfio
00:04:12.384    06:14:29	-- setup/driver.sh@21 -- # local iommu_grups
00:04:12.384    06:14:29	-- setup/driver.sh@22 -- # local unsafe_vfio
00:04:12.384    06:14:29	-- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:12.384    06:14:29	-- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:12.384    06:14:29	-- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:12.384    06:14:29	-- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:12.384    06:14:29	-- setup/driver.sh@32 -- # return 1
00:04:12.384    06:14:29	-- setup/driver.sh@38 -- # uio
00:04:12.384    06:14:29	-- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:12.384    06:14:29	-- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:12.384     06:14:29	-- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:12.384     06:14:29	-- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:12.384    06:14:29	-- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 
00:04:12.384  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz  == *\.\k\o* ]]
00:04:12.384    06:14:29	-- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:12.384  Looking for driver=uio_pci_generic
00:04:12.384   06:14:29	-- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:12.384   06:14:29	-- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:12.384   06:14:29	-- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:04:12.384   06:14:29	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:12.384    06:14:29	-- setup/driver.sh@45 -- # setup output config
00:04:12.384    06:14:29	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:12.384    06:14:29	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
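guess_driver above only picks vfio when the host actually exposes IOMMU groups (or vfio's unsafe no-IOMMU mode is switched on); with zero entries under /sys/kernel/iommu_groups it falls back to uio_pci_generic, accepting it because modprobe --show-depends resolves to real .ko modules. A hedged sketch of that decision; the function name pick_pci_driver is an assumption:

  # Sketch only: choose a userspace PCI driver the way the trace above does.
  pick_pci_driver() {
      local unsafe=""
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic
      else
          echo 'No valid driver found' >&2
          return 1
      fi
  }

  pick_pci_driver   # prints uio_pci_generic on this runner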
00:04:12.951   06:14:29	-- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:04:12.951   06:14:29	-- setup/driver.sh@58 -- # continue
00:04:12.951   06:14:29	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:13.210   06:14:29	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:13.210   06:14:29	-- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:13.210   06:14:29	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:13.210   06:14:30	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:13.210   06:14:30	-- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:13.210   06:14:30	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:13.210   06:14:30	-- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:13.210   06:14:30	-- setup/driver.sh@65 -- # setup reset
00:04:13.210   06:14:30	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:13.210   06:14:30	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
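The read -r _ _ _ _ marker setup_driver loop above then scans the "setup output config" listing: lines whose fifth field is "->" name the driver each device was just bound to, and the test only passes if every one of them matches the guessed driver (it does, so fail stays 0 and a final reset runs). A hedged sketch of that verification over a captured listing; the field layout is inferred from the trace, and setup_config_output.txt is an assumed capture, not a real artifact of this run:

  # Sketch only: confirm every "-> driver" line matches the expected driver.
  expected=uio_pci_generic
  fail=0
  while read -r _ _ _ _ marker bound_driver _; do
      [[ $marker == '->' ]] || continue         # skip header lines such as "devices:"
      [[ $bound_driver == "$expected" ]] || fail=1
  done < setup_config_output.txt                # assumed capture of setup.sh config
  (( fail == 0 )) && echo "all devices bound to $expected"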
00:04:13.777  ************************************
00:04:13.777  END TEST guess_driver
00:04:13.777  ************************************
00:04:13.777  
00:04:13.777  real	0m1.367s
00:04:13.777  user	0m0.544s
00:04:13.777  sys	0m0.842s
00:04:13.777   06:14:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:13.777   06:14:30	-- common/autotest_common.sh@10 -- # set +x
00:04:13.777  ************************************
00:04:13.777  END TEST driver
00:04:13.777  ************************************
00:04:13.777  
00:04:13.777  real	0m2.143s
00:04:13.777  user	0m0.846s
00:04:13.777  sys	0m1.381s
00:04:13.777   06:14:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:13.777   06:14:30	-- common/autotest_common.sh@10 -- # set +x
00:04:13.777   06:14:30	-- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:13.777   06:14:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:13.777   06:14:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:13.777   06:14:30	-- common/autotest_common.sh@10 -- # set +x
00:04:13.777  ************************************
00:04:13.777  START TEST devices
00:04:13.777  ************************************
00:04:13.777   06:14:30	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:14.036  * Looking for test storage...
00:04:14.036  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:14.036     06:14:30	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:14.036      06:14:30	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:14.036      06:14:30	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:14.036     06:14:30	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:14.036     06:14:30	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:14.036     06:14:30	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:14.036     06:14:30	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:14.036     06:14:30	-- scripts/common.sh@335 -- # IFS=.-:
00:04:14.036     06:14:30	-- scripts/common.sh@335 -- # read -ra ver1
00:04:14.036     06:14:30	-- scripts/common.sh@336 -- # IFS=.-:
00:04:14.036     06:14:30	-- scripts/common.sh@336 -- # read -ra ver2
00:04:14.036     06:14:30	-- scripts/common.sh@337 -- # local 'op=<'
00:04:14.036     06:14:30	-- scripts/common.sh@339 -- # ver1_l=2
00:04:14.036     06:14:30	-- scripts/common.sh@340 -- # ver2_l=1
00:04:14.036     06:14:30	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:14.036     06:14:30	-- scripts/common.sh@343 -- # case "$op" in
00:04:14.036     06:14:30	-- scripts/common.sh@344 -- # : 1
00:04:14.036     06:14:30	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:14.036     06:14:30	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:14.036      06:14:30	-- scripts/common.sh@364 -- # decimal 1
00:04:14.036      06:14:30	-- scripts/common.sh@352 -- # local d=1
00:04:14.036      06:14:30	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:14.036      06:14:30	-- scripts/common.sh@354 -- # echo 1
00:04:14.036     06:14:30	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:14.036      06:14:30	-- scripts/common.sh@365 -- # decimal 2
00:04:14.036      06:14:30	-- scripts/common.sh@352 -- # local d=2
00:04:14.036      06:14:30	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:14.036      06:14:30	-- scripts/common.sh@354 -- # echo 2
00:04:14.036     06:14:30	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:14.036     06:14:30	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:14.036     06:14:30	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:14.036     06:14:30	-- scripts/common.sh@367 -- # return 0
00:04:14.036     06:14:30	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:14.036     06:14:30	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:14.036  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.036  		--rc genhtml_branch_coverage=1
00:04:14.036  		--rc genhtml_function_coverage=1
00:04:14.036  		--rc genhtml_legend=1
00:04:14.036  		--rc geninfo_all_blocks=1
00:04:14.036  		--rc geninfo_unexecuted_blocks=1
00:04:14.036  		
00:04:14.036  		'
00:04:14.036     06:14:30	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:14.036  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.036  		--rc genhtml_branch_coverage=1
00:04:14.036  		--rc genhtml_function_coverage=1
00:04:14.036  		--rc genhtml_legend=1
00:04:14.036  		--rc geninfo_all_blocks=1
00:04:14.036  		--rc geninfo_unexecuted_blocks=1
00:04:14.036  		
00:04:14.036  		'
00:04:14.036     06:14:30	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:14.036  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.036  		--rc genhtml_branch_coverage=1
00:04:14.036  		--rc genhtml_function_coverage=1
00:04:14.036  		--rc genhtml_legend=1
00:04:14.036  		--rc geninfo_all_blocks=1
00:04:14.036  		--rc geninfo_unexecuted_blocks=1
00:04:14.036  		
00:04:14.036  		'
00:04:14.036     06:14:30	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:14.036  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.036  		--rc genhtml_branch_coverage=1
00:04:14.036  		--rc genhtml_function_coverage=1
00:04:14.036  		--rc genhtml_legend=1
00:04:14.036  		--rc geninfo_all_blocks=1
00:04:14.036  		--rc geninfo_unexecuted_blocks=1
00:04:14.036  		
00:04:14.036  		'
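The trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x: it splits each version string on '.', '-' and ':' and compares the components numerically, and only for an older lcov does it keep the long-form --rc coverage options. A minimal sketch of the same idea, assuming purely numeric components; ver_lt is an illustrative name, not an SPDK helper:

    ver_lt() {                     # returns 0 when $1 < $2
        local IFS=.-:
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                   # equal, so not less-than
    }
    ver_lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"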
00:04:14.036   06:14:30	-- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:14.036   06:14:30	-- setup/devices.sh@192 -- # setup reset
00:04:14.036   06:14:30	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:14.036   06:14:30	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:14.972   06:14:31	-- setup/devices.sh@194 -- # get_zoned_devs
00:04:14.972   06:14:31	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:14.972   06:14:31	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:14.972   06:14:31	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:14.972   06:14:31	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:14.972   06:14:31	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:14.972   06:14:31	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:14.972   06:14:31	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:14.972   06:14:31	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:14.972   06:14:31	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:14.972   06:14:31	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1
00:04:14.972   06:14:31	-- common/autotest_common.sh@1657 -- # local device=nvme1n1
00:04:14.972   06:14:31	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:04:14.972   06:14:31	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:14.972   06:14:31	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:14.972   06:14:31	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2
00:04:14.973   06:14:31	-- common/autotest_common.sh@1657 -- # local device=nvme1n2
00:04:14.973   06:14:31	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:04:14.973   06:14:31	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:14.973   06:14:31	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:14.973   06:14:31	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3
00:04:14.973   06:14:31	-- common/autotest_common.sh@1657 -- # local device=nvme1n3
00:04:14.973   06:14:31	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:04:14.973   06:14:31	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
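get_zoned_devs above walks /sys/block/nvme* and treats any namespace whose queue/zoned attribute reads anything other than "none" as zoned; all four namespaces on this VM are conventional, so the array stays empty. A hedged sketch of that filter (list_zoned is an illustrative name):

    list_zoned() {
        local dev
        for dev in /sys/block/nvme*; do
            [[ -e $dev/queue/zoned ]] || continue
            # "none" = conventional namespace; anything else (e.g. host-managed) is zoned
            [[ $(<"$dev/queue/zoned") != none ]] && echo "${dev##*/}"
        done
    }
    list_zoned     # prints nothing on this VM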
00:04:14.973   06:14:31	-- setup/devices.sh@196 -- # blocks=()
00:04:14.973   06:14:31	-- setup/devices.sh@196 -- # declare -a blocks
00:04:14.973   06:14:31	-- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:14.973   06:14:31	-- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:14.973   06:14:31	-- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:14.973   06:14:31	-- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:14.973   06:14:31	-- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:14.973   06:14:31	-- setup/devices.sh@201 -- # ctrl=nvme0
00:04:14.973   06:14:31	-- setup/devices.sh@202 -- # pci=0000:00:06.0
00:04:14.973   06:14:31	-- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:04:14.973   06:14:31	-- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:14.973   06:14:31	-- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:04:14.973   06:14:31	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:04:14.973  No valid GPT data, bailing
00:04:14.973    06:14:31	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:14.973   06:14:31	-- scripts/common.sh@393 -- # pt=
00:04:14.973   06:14:31	-- scripts/common.sh@394 -- # return 1
00:04:14.973    06:14:31	-- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:14.973    06:14:31	-- setup/common.sh@76 -- # local dev=nvme0n1
00:04:14.973    06:14:31	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:14.973    06:14:31	-- setup/common.sh@80 -- # echo 5368709120
00:04:14.973   06:14:31	-- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size ))
00:04:14.973   06:14:31	-- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:14.973   06:14:31	-- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0
00:04:14.973   06:14:31	-- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:14.973   06:14:31	-- setup/devices.sh@201 -- # ctrl=nvme1n1
00:04:14.973   06:14:31	-- setup/devices.sh@201 -- # ctrl=nvme1
00:04:14.973   06:14:31	-- setup/devices.sh@202 -- # pci=0000:00:07.0
00:04:14.973   06:14:31	-- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]]
00:04:14.973   06:14:31	-- setup/devices.sh@204 -- # block_in_use nvme1n1
00:04:14.973   06:14:31	-- scripts/common.sh@380 -- # local block=nvme1n1 pt
00:04:14.973   06:14:31	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:04:14.973  No valid GPT data, bailing
00:04:14.973    06:14:31	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:14.973   06:14:31	-- scripts/common.sh@393 -- # pt=
00:04:14.973   06:14:31	-- scripts/common.sh@394 -- # return 1
00:04:14.973    06:14:31	-- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:04:14.973    06:14:31	-- setup/common.sh@76 -- # local dev=nvme1n1
00:04:14.973    06:14:31	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:04:14.973    06:14:31	-- setup/common.sh@80 -- # echo 4294967296
00:04:14.973   06:14:31	-- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size ))
00:04:14.973   06:14:31	-- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:14.973   06:14:31	-- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0
00:04:14.973   06:14:31	-- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:14.973   06:14:31	-- setup/devices.sh@201 -- # ctrl=nvme1n2
00:04:14.973   06:14:31	-- setup/devices.sh@201 -- # ctrl=nvme1
00:04:14.973   06:14:31	-- setup/devices.sh@202 -- # pci=0000:00:07.0
00:04:14.973   06:14:31	-- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]]
00:04:14.973   06:14:31	-- setup/devices.sh@204 -- # block_in_use nvme1n2
00:04:14.973   06:14:31	-- scripts/common.sh@380 -- # local block=nvme1n2 pt
00:04:14.973   06:14:31	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2
00:04:14.973  No valid GPT data, bailing
00:04:14.973    06:14:31	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:04:14.973   06:14:31	-- scripts/common.sh@393 -- # pt=
00:04:14.973   06:14:31	-- scripts/common.sh@394 -- # return 1
00:04:14.973    06:14:31	-- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2
00:04:14.973    06:14:31	-- setup/common.sh@76 -- # local dev=nvme1n2
00:04:14.973    06:14:31	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]]
00:04:14.973    06:14:31	-- setup/common.sh@80 -- # echo 4294967296
00:04:14.973   06:14:31	-- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size ))
00:04:14.973   06:14:31	-- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:14.973   06:14:31	-- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0
00:04:14.973   06:14:31	-- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:14.973   06:14:31	-- setup/devices.sh@201 -- # ctrl=nvme1n3
00:04:14.973   06:14:31	-- setup/devices.sh@201 -- # ctrl=nvme1
00:04:14.973   06:14:31	-- setup/devices.sh@202 -- # pci=0000:00:07.0
00:04:14.973   06:14:31	-- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]]
00:04:14.973   06:14:31	-- setup/devices.sh@204 -- # block_in_use nvme1n3
00:04:14.973   06:14:31	-- scripts/common.sh@380 -- # local block=nvme1n3 pt
00:04:14.973   06:14:31	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3
00:04:14.973  No valid GPT data, bailing
00:04:14.973    06:14:31	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:04:14.973   06:14:31	-- scripts/common.sh@393 -- # pt=
00:04:14.973   06:14:31	-- scripts/common.sh@394 -- # return 1
00:04:14.973    06:14:31	-- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3
00:04:14.973    06:14:31	-- setup/common.sh@76 -- # local dev=nvme1n3
00:04:14.973    06:14:31	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]]
00:04:14.973    06:14:31	-- setup/common.sh@80 -- # echo 4294967296
00:04:14.973   06:14:31	-- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size ))
00:04:14.973   06:14:31	-- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:14.973   06:14:31	-- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0
00:04:14.973   06:14:31	-- setup/devices.sh@209 -- # (( 4 > 0 ))
00:04:14.973   06:14:31	-- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
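Each candidate block device above is mapped to its PCI address, checked for an existing partition table (spdk-gpt.py and blkid find none, hence "No valid GPT data, bailing"), and admitted only if it is at least min_disk_size = 3221225472 bytes (3 GiB). Here nvme0n1 reports 5368709120 bytes (5 GiB) and nvme1n1-n3 report 4294967296 bytes (4 GiB), so all four qualify and nvme0n1 becomes the test disk. A sketch of the size gate, assuming the usual 512-byte sectors behind the sysfs size attribute:

    min_disk_size=$(( 3 * 1024 * 1024 * 1024 ))      # 3221225472
    for block in /sys/block/nvme*n*; do
        [[ ${block##*/} == *c* ]] && continue        # skip multipath controller nodes
        bytes=$(( $(<"$block/size") * 512 ))         # sysfs size is in 512-byte sectors
        (( bytes >= min_disk_size )) && echo "${block##*/}: $bytes bytes, usable"
    done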
00:04:14.973   06:14:31	-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:14.973   06:14:31	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:14.973   06:14:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:14.973   06:14:31	-- common/autotest_common.sh@10 -- # set +x
00:04:14.973  ************************************
00:04:14.973  START TEST nvme_mount
00:04:14.973  ************************************
00:04:14.973   06:14:31	-- common/autotest_common.sh@1114 -- # nvme_mount
00:04:14.973   06:14:31	-- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:14.973   06:14:31	-- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:14.973   06:14:31	-- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:14.973   06:14:31	-- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:14.973   06:14:31	-- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:14.973   06:14:31	-- setup/common.sh@39 -- # local disk=nvme0n1
00:04:14.973   06:14:31	-- setup/common.sh@40 -- # local part_no=1
00:04:14.973   06:14:31	-- setup/common.sh@41 -- # local size=1073741824
00:04:14.973   06:14:31	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:14.973   06:14:31	-- setup/common.sh@44 -- # parts=()
00:04:14.973   06:14:31	-- setup/common.sh@44 -- # local parts
00:04:14.973   06:14:31	-- setup/common.sh@46 -- # (( part = 1 ))
00:04:14.973   06:14:31	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:14.973   06:14:31	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:14.973   06:14:31	-- setup/common.sh@46 -- # (( part++ ))
00:04:14.973   06:14:31	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:14.973   06:14:31	-- setup/common.sh@51 -- # (( size /= 4096 ))
00:04:14.973   06:14:31	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:14.973   06:14:31	-- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:16.350  Creating new GPT entries in memory.
00:04:16.350  GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:16.350  other utilities.
00:04:16.350   06:14:32	-- setup/common.sh@57 -- # (( part = 1 ))
00:04:16.350   06:14:32	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:16.350   06:14:32	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:16.350   06:14:32	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:16.350   06:14:32	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:04:17.292  Creating new GPT entries in memory.
00:04:17.292  The operation has completed successfully.
00:04:17.292   06:14:33	-- setup/common.sh@57 -- # (( part++ ))
00:04:17.292   06:14:33	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:17.292   06:14:33	-- setup/common.sh@62 -- # wait 53779
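partition_drive wipes the GPT with sgdisk --zap-all and then creates the requested partition while sync_dev_uevents waits for the matching udev events (the "wait 53779" is that background helper finishing). The sgdisk bounds follow directly from the traced values; assuming sgdisk's usual 512-byte sectors, 262144 sectors is 128 MiB per partition:

    size=$(( 1073741824 / 4096 ))                 # 262144, from "size /= 4096" above
    part_start=2048
    part_end=$(( part_start + size - 1 ))         # 264191
    echo "sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}"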
00:04:17.292   06:14:34	-- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:17.292   06:14:34	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=
00:04:17.292   06:14:34	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:17.292   06:14:34	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:17.292   06:14:34	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:17.292   06:14:34	-- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
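The mkfs helper then formats the new partition with ext4 and mounts it under the test directory, and a dummy file is placed there for the later verify/cleanup steps to find. A sketch under those assumptions (touch stands in for however the test file is actually written):

    dev=/dev/nvme0n1p1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev"
    mount "$dev" "$mnt"
    touch "$mnt/test_nvme"     # dummy file checked by verify and removed on cleanup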
00:04:17.292   06:14:34	-- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:17.292   06:14:34	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:17.292   06:14:34	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:17.292   06:14:34	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:17.292   06:14:34	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:17.292   06:14:34	-- setup/devices.sh@53 -- # local found=0
00:04:17.292   06:14:34	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:17.292   06:14:34	-- setup/devices.sh@56 -- # :
00:04:17.292   06:14:34	-- setup/devices.sh@59 -- # local pci status
00:04:17.292   06:14:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:17.292    06:14:34	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:17.292    06:14:34	-- setup/devices.sh@47 -- # setup output config
00:04:17.292    06:14:34	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:17.292    06:14:34	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:17.292   06:14:34	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:17.292   06:14:34	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:17.292   06:14:34	-- setup/devices.sh@63 -- # found=1
00:04:17.292   06:14:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:17.292   06:14:34	-- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:17.292   06:14:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:17.859   06:14:34	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:17.859   06:14:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:17.859   06:14:34	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:17.859   06:14:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:17.859   06:14:34	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:17.859   06:14:34	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]]
00:04:17.859   06:14:34	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:17.859   06:14:34	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:17.859   06:14:34	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
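verify re-runs setup.sh config with PCI_ALLOWED restricted to the test controller and parses each output line as "pci _ _ status"; the disk passes when its status line advertises the expected mount, which is the "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev" match above. Roughly, with the loop body simplified:

    mounts=nvme0n1:nvme0n1p1
    found=0
    while read -r pci _ _ status; do
        [[ $pci == 0000:00:06.0 && $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=0000:00:06.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( found == 1 ))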
00:04:17.859   06:14:34	-- setup/devices.sh@110 -- # cleanup_nvme
00:04:17.859   06:14:34	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:17.859   06:14:34	-- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:17.859   06:14:34	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:17.859   06:14:34	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:17.859  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:17.859   06:14:34	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:17.859   06:14:34	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:18.118  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:04:18.118  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:04:18.118  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:18.118  /dev/nvme0n1: calling ioctl to re-read partition table: Success
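cleanup_nvme unmounts the test directory and runs wipefs --all over the partition and then the whole disk. The bytes wipefs reports match the signatures it removes: 53 ef at 0x438 is the ext4 superblock magic, the two "45 46 49 20 50 41 52 54" hits are the ASCII "EFI PART" markers of the primary and backup GPT headers, and 55 aa at 0x1fe is the protective-MBR boot signature. A condensed sketch:

    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1    # ext4 superblock magic
    [[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1      # GPT headers + protective MBR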
00:04:18.118   06:14:34	-- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M
00:04:18.118   06:14:34	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M
00:04:18.118   06:14:34	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:18.118   06:14:35	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:18.118   06:14:35	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:18.118   06:14:35	-- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:18.118   06:14:35	-- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:18.118   06:14:35	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:18.118   06:14:35	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:18.118   06:14:35	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:18.118   06:14:35	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:18.118   06:14:35	-- setup/devices.sh@53 -- # local found=0
00:04:18.118   06:14:35	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:18.118   06:14:35	-- setup/devices.sh@56 -- # :
00:04:18.118   06:14:35	-- setup/devices.sh@59 -- # local pci status
00:04:18.118    06:14:35	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:18.118   06:14:35	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:18.118    06:14:35	-- setup/devices.sh@47 -- # setup output config
00:04:18.118    06:14:35	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.118    06:14:35	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:18.376   06:14:35	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:18.377   06:14:35	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:18.377   06:14:35	-- setup/devices.sh@63 -- # found=1
00:04:18.377   06:14:35	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:18.377   06:14:35	-- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:18.377   06:14:35	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:18.635   06:14:35	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:18.635   06:14:35	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:18.635   06:14:35	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:18.635   06:14:35	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:18.894   06:14:35	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:18.894   06:14:35	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]]
00:04:18.894   06:14:35	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:18.894   06:14:35	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:18.894   06:14:35	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:18.894   06:14:35	-- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:18.894   06:14:35	-- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' ''
00:04:18.894   06:14:35	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:18.894   06:14:35	-- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:18.894   06:14:35	-- setup/devices.sh@50 -- # local mount_point=
00:04:18.894   06:14:35	-- setup/devices.sh@51 -- # local test_file=
00:04:18.894   06:14:35	-- setup/devices.sh@53 -- # local found=0
00:04:18.894   06:14:35	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:18.894   06:14:35	-- setup/devices.sh@59 -- # local pci status
00:04:18.894   06:14:35	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:18.894    06:14:35	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:18.894    06:14:35	-- setup/devices.sh@47 -- # setup output config
00:04:18.894    06:14:35	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.894    06:14:35	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:19.152   06:14:35	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:19.152   06:14:35	-- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:19.152   06:14:35	-- setup/devices.sh@63 -- # found=1
00:04:19.152   06:14:35	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:19.152   06:14:35	-- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:19.152   06:14:35	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:19.411   06:14:36	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:19.411   06:14:36	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:19.411   06:14:36	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:19.411   06:14:36	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:19.411   06:14:36	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:19.411   06:14:36	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:19.411   06:14:36	-- setup/devices.sh@68 -- # return 0
00:04:19.411   06:14:36	-- setup/devices.sh@128 -- # cleanup_nvme
00:04:19.411   06:14:36	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:19.411   06:14:36	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:19.411   06:14:36	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:19.411   06:14:36	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:19.670  /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:19.670  
00:04:19.670  real	0m4.453s
00:04:19.670  user	0m1.066s
00:04:19.670  sys	0m1.087s
00:04:19.670   06:14:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:19.670   06:14:36	-- common/autotest_common.sh@10 -- # set +x
00:04:19.670  ************************************
00:04:19.670  END TEST nvme_mount
00:04:19.670  ************************************
00:04:19.670   06:14:36	-- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:19.670   06:14:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:19.670   06:14:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:19.670   06:14:36	-- common/autotest_common.sh@10 -- # set +x
00:04:19.670  ************************************
00:04:19.670  START TEST dm_mount
00:04:19.670  ************************************
00:04:19.670   06:14:36	-- common/autotest_common.sh@1114 -- # dm_mount
00:04:19.670   06:14:36	-- setup/devices.sh@144 -- # pv=nvme0n1
00:04:19.670   06:14:36	-- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:19.670   06:14:36	-- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:19.670   06:14:36	-- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:19.670   06:14:36	-- setup/common.sh@39 -- # local disk=nvme0n1
00:04:19.670   06:14:36	-- setup/common.sh@40 -- # local part_no=2
00:04:19.670   06:14:36	-- setup/common.sh@41 -- # local size=1073741824
00:04:19.670   06:14:36	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:19.670   06:14:36	-- setup/common.sh@44 -- # parts=()
00:04:19.670   06:14:36	-- setup/common.sh@44 -- # local parts
00:04:19.670   06:14:36	-- setup/common.sh@46 -- # (( part = 1 ))
00:04:19.670   06:14:36	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:19.670   06:14:36	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:19.670   06:14:36	-- setup/common.sh@46 -- # (( part++ ))
00:04:19.670   06:14:36	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:19.670   06:14:36	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:19.670   06:14:36	-- setup/common.sh@46 -- # (( part++ ))
00:04:19.670   06:14:36	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:19.670   06:14:36	-- setup/common.sh@51 -- # (( size /= 4096 ))
00:04:19.670   06:14:36	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:19.670   06:14:36	-- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:20.605  Creating new GPT entries in memory.
00:04:20.605  GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:20.605  other utilities.
00:04:20.605   06:14:37	-- setup/common.sh@57 -- # (( part = 1 ))
00:04:20.605   06:14:37	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:20.605   06:14:37	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:20.605   06:14:37	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:20.605   06:14:37	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:04:21.540  Creating new GPT entries in memory.
00:04:21.540  The operation has completed successfully.
00:04:21.540   06:14:38	-- setup/common.sh@57 -- # (( part++ ))
00:04:21.540   06:14:38	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:21.540   06:14:38	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:21.540   06:14:38	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:21.540   06:14:38	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335
00:04:22.917  The operation has completed successfully.
00:04:22.917   06:14:39	-- setup/common.sh@57 -- # (( part++ ))
00:04:22.917   06:14:39	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:22.917   06:14:39	-- setup/common.sh@62 -- # wait 54234
00:04:22.917   06:14:39	-- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:22.917   06:14:39	-- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:22.917   06:14:39	-- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:22.917   06:14:39	-- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:22.917   06:14:39	-- setup/devices.sh@160 -- # for t in {1..5}
00:04:22.917   06:14:39	-- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:22.917   06:14:39	-- setup/devices.sh@161 -- # break
00:04:22.917   06:14:39	-- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:22.917    06:14:39	-- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:22.917   06:14:39	-- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:22.917   06:14:39	-- setup/devices.sh@166 -- # dm=dm-0
00:04:22.917   06:14:39	-- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:22.917   06:14:39	-- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
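For dm_mount the two fresh partitions are combined into a device-mapper target named nvme_dm_test (the dmsetup table itself is not echoed in the trace); the test then resolves the /dev/mapper symlink to its dm-N node and confirms that both partitions list it under holders/. A sketch of that resolution:

    dm=$(readlink -f /dev/mapper/nvme_dm_test)    # /dev/dm-0 here
    dm=${dm##*/}                                   # dm-0
    for part in nvme0n1p1 nvme0n1p2; do
        [[ -e /sys/class/block/$part/holders/$dm ]] && echo "$part held by $dm"
    done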
00:04:22.917   06:14:39	-- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:22.917   06:14:39	-- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size=
00:04:22.917   06:14:39	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:22.917   06:14:39	-- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:22.917   06:14:39	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:22.917   06:14:39	-- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:22.917   06:14:39	-- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:22.917   06:14:39	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:22.917   06:14:39	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:22.917   06:14:39	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:22.917   06:14:39	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:22.917   06:14:39	-- setup/devices.sh@53 -- # local found=0
00:04:22.917   06:14:39	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]]
00:04:22.917   06:14:39	-- setup/devices.sh@56 -- # :
00:04:22.917   06:14:39	-- setup/devices.sh@59 -- # local pci status
00:04:22.917   06:14:39	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:22.917    06:14:39	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:22.917    06:14:39	-- setup/devices.sh@47 -- # setup output config
00:04:22.917    06:14:39	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.917    06:14:39	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:22.917   06:14:39	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:22.917   06:14:39	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:22.917   06:14:39	-- setup/devices.sh@63 -- # found=1
00:04:22.917   06:14:39	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:22.917   06:14:39	-- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:22.917   06:14:39	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.176   06:14:40	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:23.176   06:14:40	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.176   06:14:40	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:23.176   06:14:40	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.434   06:14:40	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:23.434   06:14:40	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]]
00:04:23.434   06:14:40	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:23.434   06:14:40	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]]
00:04:23.434   06:14:40	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:23.434   06:14:40	-- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:23.434   06:14:40	-- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:04:23.434   06:14:40	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:23.434   06:14:40	-- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:04:23.434   06:14:40	-- setup/devices.sh@50 -- # local mount_point=
00:04:23.434   06:14:40	-- setup/devices.sh@51 -- # local test_file=
00:04:23.434   06:14:40	-- setup/devices.sh@53 -- # local found=0
00:04:23.434   06:14:40	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:23.434   06:14:40	-- setup/devices.sh@59 -- # local pci status
00:04:23.434   06:14:40	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.434    06:14:40	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:23.434    06:14:40	-- setup/devices.sh@47 -- # setup output config
00:04:23.434    06:14:40	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:23.434    06:14:40	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:23.434   06:14:40	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:23.434   06:14:40	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:04:23.434   06:14:40	-- setup/devices.sh@63 -- # found=1
00:04:23.434   06:14:40	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.693   06:14:40	-- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:23.693   06:14:40	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.951   06:14:40	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:23.951   06:14:40	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.951   06:14:40	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:23.951   06:14:40	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.951   06:14:40	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:23.951   06:14:40	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:23.952   06:14:40	-- setup/devices.sh@68 -- # return 0
00:04:23.952   06:14:40	-- setup/devices.sh@187 -- # cleanup_dm
00:04:23.952   06:14:40	-- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:23.952   06:14:40	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:23.952   06:14:40	-- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:23.952   06:14:40	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:23.952   06:14:40	-- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:23.952  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:23.952   06:14:40	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:23.952   06:14:40	-- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:23.952  
00:04:23.952  real	0m4.481s
00:04:23.952  user	0m0.659s
00:04:23.952  sys	0m0.760s
00:04:23.952   06:14:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:23.952   06:14:40	-- common/autotest_common.sh@10 -- # set +x
00:04:23.952  ************************************
00:04:23.952  END TEST dm_mount
00:04:23.952  ************************************
00:04:24.210   06:14:40	-- setup/devices.sh@1 -- # cleanup
00:04:24.210   06:14:40	-- setup/devices.sh@11 -- # cleanup_nvme
00:04:24.210   06:14:40	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:24.210   06:14:40	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:24.210   06:14:40	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:24.210   06:14:40	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:24.210   06:14:40	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:24.469  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:04:24.469  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:04:24.469  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:24.469  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:24.469   06:14:41	-- setup/devices.sh@12 -- # cleanup_dm
00:04:24.469   06:14:41	-- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:24.469   06:14:41	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:24.469   06:14:41	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:24.469   06:14:41	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:24.469   06:14:41	-- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:24.469   06:14:41	-- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:24.469  
00:04:24.469  real	0m10.521s
00:04:24.469  user	0m2.450s
00:04:24.469  sys	0m2.421s
00:04:24.469   06:14:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:24.469   06:14:41	-- common/autotest_common.sh@10 -- # set +x
00:04:24.469  ************************************
00:04:24.469  END TEST devices
00:04:24.469  ************************************
00:04:24.469  
00:04:24.469  real	0m22.192s
00:04:24.469  user	0m7.727s
00:04:24.469  sys	0m8.808s
00:04:24.469  ************************************
00:04:24.469  END TEST setup.sh
00:04:24.469  ************************************
00:04:24.469   06:14:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:24.469   06:14:41	-- common/autotest_common.sh@10 -- # set +x
00:04:24.469   06:14:41	-- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:24.727  Hugepages
00:04:24.727  node     hugesize     free /  total
00:04:24.727  node0   1048576kB        0 /      0
00:04:24.727  node0      2048kB     2048 /   2048
00:04:24.727  
00:04:24.727  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:24.727  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:04:24.727  NVMe                      0000:00:06.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:04:24.727  NVMe                      0000:00:07.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
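setup.sh status summarizes hugepage reservations per NUMA node and the PCI-to-driver/block-device mapping: node0 has 2048 of 2048 reserved 2 MiB pages free and no 1 GiB pages, and the two emulated NVMe controllers still own nvme0/nvme1. The hugepage columns come from sysfs; a sketch of reading them directly:

    for d in /sys/devices/system/node/node*/hugepages/hugepages-*kB; do
        node=${d%/hugepages/*}; node=${node##*/}
        size=${d##*hugepages-}
        printf '%-6s %10s  free %s / total %s\n' "$node" "$size" \
            "$(<"$d/free_hugepages")" "$(<"$d/nr_hugepages")"
    done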
00:04:24.727    06:14:41	-- spdk/autotest.sh@128 -- # uname -s
00:04:24.728   06:14:41	-- spdk/autotest.sh@128 -- # [[ Linux == Linux ]]
00:04:24.728   06:14:41	-- spdk/autotest.sh@130 -- # nvme_namespace_revert
00:04:24.728   06:14:41	-- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:25.663  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:25.663  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:25.663  0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:25.663   06:14:42	-- common/autotest_common.sh@1527 -- # sleep 1
00:04:26.599   06:14:43	-- common/autotest_common.sh@1528 -- # bdfs=()
00:04:26.599   06:14:43	-- common/autotest_common.sh@1528 -- # local bdfs
00:04:26.599   06:14:43	-- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs))
00:04:26.599    06:14:43	-- common/autotest_common.sh@1529 -- # get_nvme_bdfs
00:04:26.599    06:14:43	-- common/autotest_common.sh@1508 -- # bdfs=()
00:04:26.599    06:14:43	-- common/autotest_common.sh@1508 -- # local bdfs
00:04:26.599    06:14:43	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:26.599     06:14:43	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:26.599     06:14:43	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:04:26.972    06:14:43	-- common/autotest_common.sh@1510 -- # (( 2 == 0 ))
00:04:26.972    06:14:43	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0
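get_nvme_bdfs collects the controllers' PCI addresses by asking gen_nvme.sh for a bdev config and extracting each traddr with jq; on this VM that yields 0000:00:06.0 and 0000:00:07.0. The same pipeline, stand-alone:

    rootdir=/home/vagrant/spdk_repo/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"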
00:04:26.972   06:14:43	-- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:27.250  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:27.250  Waiting for block devices as requested
00:04:27.250  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:04:27.250  0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:04:27.250   06:14:44	-- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:04:27.250    06:14:44	-- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0
00:04:27.250     06:14:44	-- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:04:27.250     06:14:44	-- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme
00:04:27.250    06:14:44	-- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0
00:04:27.250    06:14:44	-- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]]
00:04:27.250     06:14:44	-- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0
00:04:27.250    06:14:44	-- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0
00:04:27.250   06:14:44	-- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0
00:04:27.250   06:14:44	-- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]]
00:04:27.250    06:14:44	-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:27.250    06:14:44	-- common/autotest_common.sh@1540 -- # grep oacs
00:04:27.250    06:14:44	-- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:27.250   06:14:44	-- common/autotest_common.sh@1540 -- # oacs=' 0x12a'
00:04:27.250   06:14:44	-- common/autotest_common.sh@1541 -- # oacs_ns_manage=8
00:04:27.250   06:14:44	-- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]]
00:04:27.250    06:14:44	-- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0
00:04:27.250    06:14:44	-- common/autotest_common.sh@1549 -- # grep unvmcap
00:04:27.250    06:14:44	-- common/autotest_common.sh@1549 -- # cut -d: -f2
00:04:27.250   06:14:44	-- common/autotest_common.sh@1549 -- # unvmcap=' 0'
00:04:27.250   06:14:44	-- common/autotest_common.sh@1550 -- # [[  0 -eq 0 ]]
00:04:27.250   06:14:44	-- common/autotest_common.sh@1552 -- # continue
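nvme_namespace_revert then inspects each controller with nvme id-ctrl: OACS here is 0x12a, whose bit 3 (0x8) indicates Namespace Management support, and unvmcap of 0 means there is no unallocated capacity to reclaim, so the loop simply continues. A sketch of the same checks, assuming nvme-cli's human-readable field names:

    ctrl=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/oacs/ {print $2}')
    if (( oacs & 0x8 )); then                                    # Namespace Management bit
        unvmcap=$(nvme id-ctrl "$ctrl" | awk -F: '/unvmcap/ {print $2}')
        (( unvmcap == 0 )) && echo "$ctrl: no unallocated capacity, nothing to revert"
    fi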
00:04:27.250   06:14:44	-- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:04:27.250    06:14:44	-- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0
00:04:27.250     06:14:44	-- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:04:27.250     06:14:44	-- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme
00:04:27.250    06:14:44	-- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1
00:04:27.250    06:14:44	-- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]]
00:04:27.250     06:14:44	-- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1
00:04:27.250    06:14:44	-- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1
00:04:27.250   06:14:44	-- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1
00:04:27.250   06:14:44	-- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]]
00:04:27.250    06:14:44	-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:04:27.250    06:14:44	-- common/autotest_common.sh@1540 -- # grep oacs
00:04:27.250    06:14:44	-- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:27.250   06:14:44	-- common/autotest_common.sh@1540 -- # oacs=' 0x12a'
00:04:27.250   06:14:44	-- common/autotest_common.sh@1541 -- # oacs_ns_manage=8
00:04:27.250   06:14:44	-- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]]
00:04:27.250    06:14:44	-- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1
00:04:27.250    06:14:44	-- common/autotest_common.sh@1549 -- # grep unvmcap
00:04:27.250    06:14:44	-- common/autotest_common.sh@1549 -- # cut -d: -f2
00:04:27.523   06:14:44	-- common/autotest_common.sh@1549 -- # unvmcap=' 0'
00:04:27.523   06:14:44	-- common/autotest_common.sh@1550 -- # [[  0 -eq 0 ]]
00:04:27.523   06:14:44	-- common/autotest_common.sh@1552 -- # continue
00:04:27.523   06:14:44	-- spdk/autotest.sh@133 -- # timing_exit pre_cleanup
00:04:27.523   06:14:44	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:27.523   06:14:44	-- common/autotest_common.sh@10 -- # set +x
00:04:27.523   06:14:44	-- spdk/autotest.sh@136 -- # timing_enter afterboot
00:04:27.523   06:14:44	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:27.523   06:14:44	-- common/autotest_common.sh@10 -- # set +x
00:04:27.523   06:14:44	-- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:28.090  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:28.090  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:28.090  0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:28.349   06:14:45	-- spdk/autotest.sh@138 -- # timing_exit afterboot
00:04:28.349   06:14:45	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:28.349   06:14:45	-- common/autotest_common.sh@10 -- # set +x
00:04:28.349   06:14:45	-- spdk/autotest.sh@142 -- # opal_revert_cleanup
00:04:28.349   06:14:45	-- common/autotest_common.sh@1586 -- # mapfile -t bdfs
00:04:28.349    06:14:45	-- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54
00:04:28.349    06:14:45	-- common/autotest_common.sh@1572 -- # bdfs=()
00:04:28.349    06:14:45	-- common/autotest_common.sh@1572 -- # local bdfs
00:04:28.349     06:14:45	-- common/autotest_common.sh@1574 -- # get_nvme_bdfs
00:04:28.349     06:14:45	-- common/autotest_common.sh@1508 -- # bdfs=()
00:04:28.349     06:14:45	-- common/autotest_common.sh@1508 -- # local bdfs
00:04:28.349     06:14:45	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:28.349      06:14:45	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:28.349      06:14:45	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:04:28.349     06:14:45	-- common/autotest_common.sh@1510 -- # (( 2 == 0 ))
00:04:28.349     06:14:45	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0
00:04:28.350    06:14:45	-- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs)
00:04:28.350     06:14:45	-- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device
00:04:28.350    06:14:45	-- common/autotest_common.sh@1575 -- # device=0x0010
00:04:28.350    06:14:45	-- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:28.350    06:14:45	-- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs)
00:04:28.350     06:14:45	-- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device
00:04:28.350    06:14:45	-- common/autotest_common.sh@1575 -- # device=0x0010
00:04:28.350    06:14:45	-- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:28.350    06:14:45	-- common/autotest_common.sh@1581 -- # printf '%s\n'
00:04:28.350   06:14:45	-- common/autotest_common.sh@1587 -- # [[ -z '' ]]
00:04:28.350   06:14:45	-- common/autotest_common.sh@1588 -- # return 0
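opal_revert_cleanup only acts on controllers whose PCI device ID matches 0x0a54 (an Intel data-center NVMe part); both emulated controllers here report 0x0010, so the bdfs list stays empty and the function returns without doing anything. The filter, condensed:

    want=0x0a54
    for bdf in 0000:00:06.0 0000:00:07.0; do
        [[ $(<"/sys/bus/pci/devices/$bdf/device") == "$want" ]] \
            && echo "$bdf would be reverted"
    done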
00:04:28.350   06:14:45	-- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']'
00:04:28.350   06:14:45	-- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']'
00:04:28.350   06:14:45	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:04:28.350   06:14:45	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:04:28.350   06:14:45	-- spdk/autotest.sh@160 -- # timing_enter lib
00:04:28.350   06:14:45	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:28.350   06:14:45	-- common/autotest_common.sh@10 -- # set +x
00:04:28.350   06:14:45	-- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:28.350   06:14:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:28.350   06:14:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:28.350   06:14:45	-- common/autotest_common.sh@10 -- # set +x
00:04:28.350  ************************************
00:04:28.350  START TEST env
00:04:28.350  ************************************
00:04:28.350   06:14:45	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:28.350  * Looking for test storage...
00:04:28.350  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:04:28.350    06:14:45	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:28.350     06:14:45	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:28.350     06:14:45	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:28.609    06:14:45	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:28.609    06:14:45	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:28.609    06:14:45	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:28.609    06:14:45	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:28.609    06:14:45	-- scripts/common.sh@335 -- # IFS=.-:
00:04:28.609    06:14:45	-- scripts/common.sh@335 -- # read -ra ver1
00:04:28.609    06:14:45	-- scripts/common.sh@336 -- # IFS=.-:
00:04:28.609    06:14:45	-- scripts/common.sh@336 -- # read -ra ver2
00:04:28.609    06:14:45	-- scripts/common.sh@337 -- # local 'op=<'
00:04:28.609    06:14:45	-- scripts/common.sh@339 -- # ver1_l=2
00:04:28.609    06:14:45	-- scripts/common.sh@340 -- # ver2_l=1
00:04:28.609    06:14:45	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:28.609    06:14:45	-- scripts/common.sh@343 -- # case "$op" in
00:04:28.609    06:14:45	-- scripts/common.sh@344 -- # : 1
00:04:28.609    06:14:45	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:28.609    06:14:45	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:28.609     06:14:45	-- scripts/common.sh@364 -- # decimal 1
00:04:28.609     06:14:45	-- scripts/common.sh@352 -- # local d=1
00:04:28.609     06:14:45	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:28.609     06:14:45	-- scripts/common.sh@354 -- # echo 1
00:04:28.609    06:14:45	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:28.609     06:14:45	-- scripts/common.sh@365 -- # decimal 2
00:04:28.609     06:14:45	-- scripts/common.sh@352 -- # local d=2
00:04:28.609     06:14:45	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:28.609     06:14:45	-- scripts/common.sh@354 -- # echo 2
00:04:28.609    06:14:45	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:28.609    06:14:45	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:28.609    06:14:45	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:28.609    06:14:45	-- scripts/common.sh@367 -- # return 0
00:04:28.609    06:14:45	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:28.609    06:14:45	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:28.609  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:28.609  		--rc genhtml_branch_coverage=1
00:04:28.609  		--rc genhtml_function_coverage=1
00:04:28.609  		--rc genhtml_legend=1
00:04:28.609  		--rc geninfo_all_blocks=1
00:04:28.609  		--rc geninfo_unexecuted_blocks=1
00:04:28.609  		
00:04:28.609  		'
00:04:28.609    06:14:45	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:28.609  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:28.609  		--rc genhtml_branch_coverage=1
00:04:28.609  		--rc genhtml_function_coverage=1
00:04:28.609  		--rc genhtml_legend=1
00:04:28.609  		--rc geninfo_all_blocks=1
00:04:28.609  		--rc geninfo_unexecuted_blocks=1
00:04:28.609  		
00:04:28.609  		'
00:04:28.609    06:14:45	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:28.609  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:28.609  		--rc genhtml_branch_coverage=1
00:04:28.609  		--rc genhtml_function_coverage=1
00:04:28.609  		--rc genhtml_legend=1
00:04:28.609  		--rc geninfo_all_blocks=1
00:04:28.609  		--rc geninfo_unexecuted_blocks=1
00:04:28.609  		
00:04:28.609  		'
00:04:28.609    06:14:45	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:28.609  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:28.609  		--rc genhtml_branch_coverage=1
00:04:28.609  		--rc genhtml_function_coverage=1
00:04:28.609  		--rc genhtml_legend=1
00:04:28.609  		--rc geninfo_all_blocks=1
00:04:28.609  		--rc geninfo_unexecuted_blocks=1
00:04:28.609  		
00:04:28.609  		'
00:04:28.609   06:14:45	-- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:28.609   06:14:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:28.609   06:14:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:28.609   06:14:45	-- common/autotest_common.sh@10 -- # set +x
00:04:28.609  ************************************
00:04:28.609  START TEST env_memory
00:04:28.609  ************************************
00:04:28.609   06:14:45	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:28.609  
00:04:28.609  
00:04:28.610       CUnit - A unit testing framework for C - Version 2.1-3
00:04:28.610       http://cunit.sourceforge.net/
00:04:28.610  
00:04:28.610  
00:04:28.610  Suite: memory
00:04:28.610    Test: alloc and free memory map ...[2024-12-16 06:14:45.451589] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:28.610  passed
00:04:28.610    Test: mem map translation ...[2024-12-16 06:14:45.482807] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:28.610  [2024-12-16 06:14:45.482847] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:28.610  [2024-12-16 06:14:45.482902] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:28.610  [2024-12-16 06:14:45.482913] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:28.610  passed
00:04:28.610    Test: mem map registration ...[2024-12-16 06:14:45.546960] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:04:28.610  [2024-12-16 06:14:45.546995] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:04:28.610  passed
00:04:28.868    Test: mem map adjacent registrations ...passed
00:04:28.868  
00:04:28.868  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:28.868                suites      1      1    n/a      0        0
00:04:28.868                 tests      4      4      4      0        0
00:04:28.868               asserts    152    152    152      0      n/a
00:04:28.868  
00:04:28.868  Elapsed time =    0.213 seconds
00:04:28.868  
00:04:28.868  real	0m0.231s
00:04:28.868  user	0m0.212s
00:04:28.868  sys	0m0.013s
00:04:28.868   06:14:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:28.868   06:14:45	-- common/autotest_common.sh@10 -- # set +x
00:04:28.868  ************************************
00:04:28.868  END TEST env_memory
00:04:28.868  ************************************
00:04:28.868   06:14:45	-- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:28.868   06:14:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:28.868   06:14:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:28.868   06:14:45	-- common/autotest_common.sh@10 -- # set +x
00:04:28.868  ************************************
00:04:28.868  START TEST env_vtophys
00:04:28.868  ************************************
00:04:28.868   06:14:45	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:28.868  EAL: lib.eal log level changed from notice to debug
00:04:28.868  EAL: Detected lcore 0 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 1 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 2 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 3 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 4 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 5 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 6 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 7 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 8 as core 0 on socket 0
00:04:28.868  EAL: Detected lcore 9 as core 0 on socket 0
00:04:28.868  EAL: Maximum logical cores by configuration: 128
00:04:28.868  EAL: Detected CPU lcores: 10
00:04:28.868  EAL: Detected NUMA nodes: 1
00:04:28.869  EAL: Checking presence of .so 'librte_eal.so.24.0'
00:04:28.869  EAL: Detected shared linkage of DPDK
00:04:28.869  EAL: No shared files mode enabled, IPC will be disabled
00:04:28.869  EAL: Selected IOVA mode 'PA'
00:04:28.869  EAL: Probing VFIO support...
00:04:28.869  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:28.869  EAL: VFIO modules not loaded, skipping VFIO support...
00:04:28.869  EAL: Ask a virtual area of 0x2e000 bytes
00:04:28.869  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:28.869  EAL: Setting up physically contiguous memory...
00:04:28.869  EAL: Setting maximum number of open files to 524288
00:04:28.869  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:28.869  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:28.869  EAL: Ask a virtual area of 0x61000 bytes
00:04:28.869  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:28.869  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:28.869  EAL: Ask a virtual area of 0x400000000 bytes
00:04:28.869  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:28.869  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:28.869  EAL: Ask a virtual area of 0x61000 bytes
00:04:28.869  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:28.869  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:28.869  EAL: Ask a virtual area of 0x400000000 bytes
00:04:28.869  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:28.869  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:28.869  EAL: Ask a virtual area of 0x61000 bytes
00:04:28.869  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:28.869  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:28.869  EAL: Ask a virtual area of 0x400000000 bytes
00:04:28.869  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:28.869  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:28.869  EAL: Ask a virtual area of 0x61000 bytes
00:04:28.869  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:28.869  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:28.869  EAL: Ask a virtual area of 0x400000000 bytes
00:04:28.869  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:28.869  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:28.869  EAL: Hugepages will be freed exactly as allocated.
00:04:28.869  EAL: No shared files mode enabled, IPC is disabled
00:04:28.869  EAL: No shared files mode enabled, IPC is disabled
00:04:28.869  EAL: TSC frequency is ~2200000 KHz
00:04:28.869  EAL: Main lcore 0 is ready (tid=7f15c0df9a00;cpuset=[0])
00:04:28.869  EAL: Trying to obtain current memory policy.
00:04:28.869  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:28.869  EAL: Restoring previous memory policy: 0
00:04:28.869  EAL: request: mp_malloc_sync
00:04:28.869  EAL: No shared files mode enabled, IPC is disabled
00:04:28.869  EAL: Heap on socket 0 was expanded by 2MB
00:04:28.869  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:28.869  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:04:28.869  EAL: Mem event callback 'spdk:(nil)' registered
00:04:28.869  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:04:29.128  
00:04:29.128  
00:04:29.128       CUnit - A unit testing framework for C - Version 2.1-3
00:04:29.128       http://cunit.sourceforge.net/
00:04:29.128  
00:04:29.128  
00:04:29.128  Suite: components_suite
00:04:29.128    Test: vtophys_malloc_test ...passed
00:04:29.128    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:29.128  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.128  EAL: Restoring previous memory policy: 4
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was expanded by 4MB
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was shrunk by 4MB
00:04:29.128  EAL: Trying to obtain current memory policy.
00:04:29.128  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.128  EAL: Restoring previous memory policy: 4
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was expanded by 6MB
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was shrunk by 6MB
00:04:29.128  EAL: Trying to obtain current memory policy.
00:04:29.128  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.128  EAL: Restoring previous memory policy: 4
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was expanded by 10MB
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was shrunk by 10MB
00:04:29.128  EAL: Trying to obtain current memory policy.
00:04:29.128  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.128  EAL: Restoring previous memory policy: 4
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was expanded by 18MB
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was shrunk by 18MB
00:04:29.128  EAL: Trying to obtain current memory policy.
00:04:29.128  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.128  EAL: Restoring previous memory policy: 4
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was expanded by 34MB
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was shrunk by 34MB
00:04:29.128  EAL: Trying to obtain current memory policy.
00:04:29.128  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.128  EAL: Restoring previous memory policy: 4
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was expanded by 66MB
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was shrunk by 66MB
00:04:29.128  EAL: Trying to obtain current memory policy.
00:04:29.128  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.128  EAL: Restoring previous memory policy: 4
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was expanded by 130MB
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was shrunk by 130MB
00:04:29.128  EAL: Trying to obtain current memory policy.
00:04:29.128  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.128  EAL: Restoring previous memory policy: 4
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.128  EAL: request: mp_malloc_sync
00:04:29.128  EAL: No shared files mode enabled, IPC is disabled
00:04:29.128  EAL: Heap on socket 0 was expanded by 258MB
00:04:29.128  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.387  EAL: request: mp_malloc_sync
00:04:29.387  EAL: No shared files mode enabled, IPC is disabled
00:04:29.387  EAL: Heap on socket 0 was shrunk by 258MB
00:04:29.387  EAL: Trying to obtain current memory policy.
00:04:29.388  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.388  EAL: Restoring previous memory policy: 4
00:04:29.388  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.388  EAL: request: mp_malloc_sync
00:04:29.388  EAL: No shared files mode enabled, IPC is disabled
00:04:29.388  EAL: Heap on socket 0 was expanded by 514MB
00:04:29.647  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.647  EAL: request: mp_malloc_sync
00:04:29.647  EAL: No shared files mode enabled, IPC is disabled
00:04:29.647  EAL: Heap on socket 0 was shrunk by 514MB
00:04:29.647  EAL: Trying to obtain current memory policy.
00:04:29.647  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.905  EAL: Restoring previous memory policy: 4
00:04:29.905  EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.905  EAL: request: mp_malloc_sync
00:04:29.905  EAL: No shared files mode enabled, IPC is disabled
00:04:29.905  EAL: Heap on socket 0 was expanded by 1026MB
00:04:30.164  EAL: Calling mem event callback 'spdk:(nil)'
00:04:30.164  passed
00:04:30.164  
00:04:30.164  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:30.164                suites      1      1    n/a      0        0
00:04:30.164                 tests      2      2      2      0        0
00:04:30.164               asserts   5246   5246   5246      0      n/a
00:04:30.164  
00:04:30.164  Elapsed time =    1.183 seconds
00:04:30.164  EAL: request: mp_malloc_sync
00:04:30.164  EAL: No shared files mode enabled, IPC is disabled
00:04:30.164  EAL: Heap on socket 0 was shrunk by 1026MB
00:04:30.164  EAL: Calling mem event callback 'spdk:(nil)'
00:04:30.164  EAL: request: mp_malloc_sync
00:04:30.164  EAL: No shared files mode enabled, IPC is disabled
00:04:30.164  EAL: Heap on socket 0 was shrunk by 2MB
00:04:30.164  EAL: No shared files mode enabled, IPC is disabled
00:04:30.164  EAL: No shared files mode enabled, IPC is disabled
00:04:30.164  EAL: No shared files mode enabled, IPC is disabled
00:04:30.164  
00:04:30.164  real	0m1.388s
00:04:30.164  user	0m0.757s
00:04:30.164  sys	0m0.487s
00:04:30.164   06:14:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:30.164  ************************************
00:04:30.164  END TEST env_vtophys
00:04:30.164   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.164  ************************************
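
The vtophys test above allocates buffers of increasing size (the heap expand/shrink lines) and checks each one's physical translation. A minimal C sketch of the lookup it relies on, assuming spdk_dma_malloc()/spdk_vtophys() from include/spdk/env.h (the buffer size, alignment and app name are illustrative):

    /* vtophys_sketch.c - illustrative virtual-to-physical lookup */
    #include <stdio.h>
    #include <inttypes.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        void *buf;
        uint64_t paddr;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";            /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* DMA-safe (pinned, hugepage-backed) allocation, 2 MB aligned. */
        buf = spdk_dma_malloc(4096, 0x200000, NULL);
        if (buf == NULL) {
            return 1;
        }

        paddr = spdk_vtophys(buf, NULL);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "no physical mapping for %p\n", buf);
        } else {
            printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
        }

        spdk_dma_free(buf);
        return 0;
    }
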
00:04:30.164   06:14:47	-- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:30.164   06:14:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:30.164   06:14:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:30.164   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.164  ************************************
00:04:30.164  START TEST env_pci
00:04:30.164  ************************************
00:04:30.164   06:14:47	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:30.423  
00:04:30.423  
00:04:30.423       CUnit - A unit testing framework for C - Version 2.1-3
00:04:30.423       http://cunit.sourceforge.net/
00:04:30.423  
00:04:30.423  
00:04:30.423  Suite: pci
00:04:30.423    Test: pci_hook ...[2024-12-16 06:14:47.149262] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55373 has claimed it
00:04:30.423  passed
00:04:30.423  
00:04:30.423  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:30.423                suites      1      1    n/a      0        0
00:04:30.423                 tests      1      1      1      0        0
00:04:30.423               asserts     25     25     25      0      n/a
00:04:30.423  
00:04:30.423  Elapsed time =    0.002 seconds
00:04:30.423  EAL: Cannot find device (10000:00:01.0)
00:04:30.423  EAL: Failed to attach device on primary process
00:04:30.423  
00:04:30.423  real	0m0.023s
00:04:30.423  user	0m0.011s
00:04:30.423  sys	0m0.012s
00:04:30.423   06:14:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:30.423   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.423  ************************************
00:04:30.423  END TEST env_pci
00:04:30.423  ************************************
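
The pci_hook test above passes despite the *ERROR* line: the "Cannot create lock on device /var/tmp/spdk_pci_lock_..." message appears to be a second claim on an already-claimed device failing, which is the behavior under test. A hedged C sketch of that claim path, assuming the spdk_pci_* helpers in include/spdk/env.h (the enumeration callback and its behavior are illustrative, not the test's actual code):

    /* pci_claim_sketch.c - illustrative per-device claim via lock file */
    #include <stdio.h>
    #include "spdk/env.h"

    static int
    enum_cb(void *ctx, struct spdk_pci_device *dev)
    {
        struct spdk_pci_addr addr = spdk_pci_device_get_addr(dev);
        char bdf[32];

        spdk_pci_addr_fmt(bdf, sizeof(bdf), &addr);
        /* spdk_pci_device_claim() takes a per-BDF lock under /var/tmp so two
         * processes cannot drive the same device at once. */
        if (spdk_pci_device_claim(dev) < 0) {
            fprintf(stderr, "%s already claimed by another process\n", bdf);
        } else {
            printf("claimed %s\n", bdf);
        }
        return 0;
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "pci_claim_sketch";          /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        return spdk_pci_enumerate(spdk_pci_nvme_get_driver(), enum_cb, NULL);
    }
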
00:04:30.423   06:14:47	-- env/env.sh@14 -- # argv='-c 0x1 '
00:04:30.423    06:14:47	-- env/env.sh@15 -- # uname
00:04:30.423   06:14:47	-- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:30.423   06:14:47	-- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:30.423   06:14:47	-- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:30.423   06:14:47	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:04:30.423   06:14:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:30.423   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.423  ************************************
00:04:30.423  START TEST env_dpdk_post_init
00:04:30.423  ************************************
00:04:30.423   06:14:47	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:30.423  EAL: Detected CPU lcores: 10
00:04:30.423  EAL: Detected NUMA nodes: 1
00:04:30.423  EAL: Detected shared linkage of DPDK
00:04:30.423  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:30.423  EAL: Selected IOVA mode 'PA'
00:04:30.423  TELEMETRY: No legacy callbacks, legacy socket not created
00:04:30.423  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1)
00:04:30.423  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1)
00:04:30.423  Starting DPDK initialization...
00:04:30.423  Starting SPDK post initialization...
00:04:30.423  SPDK NVMe probe
00:04:30.423  Attaching to 0000:00:06.0
00:04:30.423  Attaching to 0000:00:07.0
00:04:30.423  Attached to 0000:00:06.0
00:04:30.423  Attached to 0000:00:07.0
00:04:30.423  Cleaning up...
00:04:30.423  
00:04:30.423  real	0m0.172s
00:04:30.423  user	0m0.041s
00:04:30.423  sys	0m0.031s
00:04:30.423   06:14:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:30.423   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.423  ************************************
00:04:30.423  END TEST env_dpdk_post_init
00:04:30.423  ************************************
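
env_dpdk_post_init above brings the environment up with "-c 0x1 --base-virtaddr=0x200000000000" and then probes the two emulated NVMe controllers. A minimal C sketch of that bring-up, assuming the spdk_env_opts fields in include/spdk/env.h (only the two options visible on the command line above are set; everything else stays at its default):

    /* env_init_sketch.c - illustrative SPDK/DPDK environment bring-up */
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_sketch";          /* hypothetical app name */
        opts.core_mask = "0x1";                  /* one core, as in the log */
        opts.base_virtaddr = 0x200000000000ULL;  /* stable VA base for hugepage maps */

        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "Unable to initialize SPDK env\n");
            return 1;
        }

        printf("SPDK environment initialized on core mask %s\n", opts.core_mask);
        spdk_env_fini();
        return 0;
    }
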
00:04:30.682    06:14:47	-- env/env.sh@26 -- # uname
00:04:30.682   06:14:47	-- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:30.682   06:14:47	-- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:30.682   06:14:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:30.682   06:14:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:30.682   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.682  ************************************
00:04:30.682  START TEST env_mem_callbacks
00:04:30.682  ************************************
00:04:30.682   06:14:47	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:30.682  EAL: Detected CPU lcores: 10
00:04:30.682  EAL: Detected NUMA nodes: 1
00:04:30.682  EAL: Detected shared linkage of DPDK
00:04:30.682  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:30.682  EAL: Selected IOVA mode 'PA'
00:04:30.682  TELEMETRY: No legacy callbacks, legacy socket not created
00:04:30.682  
00:04:30.682  
00:04:30.682       CUnit - A unit testing framework for C - Version 2.1-3
00:04:30.682       http://cunit.sourceforge.net/
00:04:30.682  
00:04:30.682  
00:04:30.682  Suite: memory
00:04:30.682    Test: test ...
00:04:30.682  register 0x200000200000 2097152
00:04:30.682  malloc 3145728
00:04:30.682  register 0x200000400000 4194304
00:04:30.682  buf 0x200000500000 len 3145728 PASSED
00:04:30.682  malloc 64
00:04:30.682  buf 0x2000004fff40 len 64 PASSED
00:04:30.682  malloc 4194304
00:04:30.682  register 0x200000800000 6291456
00:04:30.682  buf 0x200000a00000 len 4194304 PASSED
00:04:30.682  free 0x200000500000 3145728
00:04:30.682  free 0x2000004fff40 64
00:04:30.682  unregister 0x200000400000 4194304 PASSED
00:04:30.682  free 0x200000a00000 4194304
00:04:30.682  unregister 0x200000800000 6291456 PASSED
00:04:30.682  malloc 8388608
00:04:30.682  register 0x200000400000 10485760
00:04:30.682  buf 0x200000600000 len 8388608 PASSED
00:04:30.682  free 0x200000600000 8388608
00:04:30.682  unregister 0x200000400000 10485760 PASSED
00:04:30.682  passed
00:04:30.682  
00:04:30.682  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:30.682                suites      1      1    n/a      0        0
00:04:30.682                 tests      1      1      1      0        0
00:04:30.682               asserts     15     15     15      0      n/a
00:04:30.682  
00:04:30.682  Elapsed time =    0.008 seconds
00:04:30.682  
00:04:30.682  real	0m0.139s
00:04:30.682  user	0m0.014s
00:04:30.682  sys	0m0.024s
00:04:30.682   06:14:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:30.682   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.682  ************************************
00:04:30.682  END TEST env_mem_callbacks
00:04:30.682  ************************************
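
The register/unregister lines above come from a mem map whose notify callback fires on every spdk_mem_register()/spdk_mem_unregister() event as heap memory comes and goes. A C sketch of wiring up such a callback, assuming the spdk_mem_map_ops layout in include/spdk/env.h (whether a given allocation produces a new registration depends on what the heap already has mapped):

    /* mem_cb_sketch.c - illustrative mem-map notify callback */
    #include <stdio.h>
    #include "spdk/env.h"

    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops ops = {
        .notify_cb = notify_cb,
        .are_contiguous = NULL,
    };

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_mem_map *map;
        void *buf;

        spdk_env_opts_init(&opts);
        opts.name = "mem_cb_sketch";             /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        map = spdk_mem_map_alloc(0, &ops, NULL);

        /* A hugepage-backed allocation may register a new region, which in
         * turn invokes notify_cb -- the same flow behind the register/free
         * lines in the log above. */
        buf = spdk_dma_malloc(3 * 1024 * 1024, 0, NULL);
        spdk_dma_free(buf);

        spdk_mem_map_free(&map);
        return 0;
    }
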
00:04:30.682  
00:04:30.682  real	0m2.405s
00:04:30.682  user	0m1.238s
00:04:30.682  sys	0m0.800s
00:04:30.682   06:14:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:30.682   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.682  ************************************
00:04:30.682  END TEST env
00:04:30.682  ************************************
00:04:30.940   06:14:47	-- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:30.940   06:14:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:30.940   06:14:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:30.940   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:30.940  ************************************
00:04:30.940  START TEST rpc
00:04:30.940  ************************************
00:04:30.940   06:14:47	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:30.940  * Looking for test storage...
00:04:30.940  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:30.940    06:14:47	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:30.940     06:14:47	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:30.940     06:14:47	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:30.940    06:14:47	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:30.940    06:14:47	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:30.940    06:14:47	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:30.940    06:14:47	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:30.940    06:14:47	-- scripts/common.sh@335 -- # IFS=.-:
00:04:30.940    06:14:47	-- scripts/common.sh@335 -- # read -ra ver1
00:04:30.940    06:14:47	-- scripts/common.sh@336 -- # IFS=.-:
00:04:30.940    06:14:47	-- scripts/common.sh@336 -- # read -ra ver2
00:04:30.940    06:14:47	-- scripts/common.sh@337 -- # local 'op=<'
00:04:30.940    06:14:47	-- scripts/common.sh@339 -- # ver1_l=2
00:04:30.940    06:14:47	-- scripts/common.sh@340 -- # ver2_l=1
00:04:30.940    06:14:47	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:30.940    06:14:47	-- scripts/common.sh@343 -- # case "$op" in
00:04:30.940    06:14:47	-- scripts/common.sh@344 -- # : 1
00:04:30.941    06:14:47	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:30.941    06:14:47	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:30.941     06:14:47	-- scripts/common.sh@364 -- # decimal 1
00:04:30.941     06:14:47	-- scripts/common.sh@352 -- # local d=1
00:04:30.941     06:14:47	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:30.941     06:14:47	-- scripts/common.sh@354 -- # echo 1
00:04:30.941    06:14:47	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:30.941     06:14:47	-- scripts/common.sh@365 -- # decimal 2
00:04:30.941     06:14:47	-- scripts/common.sh@352 -- # local d=2
00:04:30.941     06:14:47	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:30.941     06:14:47	-- scripts/common.sh@354 -- # echo 2
00:04:30.941    06:14:47	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:30.941    06:14:47	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:30.941    06:14:47	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:30.941    06:14:47	-- scripts/common.sh@367 -- # return 0
00:04:30.941    06:14:47	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:30.941    06:14:47	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:30.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.941  		--rc genhtml_branch_coverage=1
00:04:30.941  		--rc genhtml_function_coverage=1
00:04:30.941  		--rc genhtml_legend=1
00:04:30.941  		--rc geninfo_all_blocks=1
00:04:30.941  		--rc geninfo_unexecuted_blocks=1
00:04:30.941  		
00:04:30.941  		'
00:04:30.941    06:14:47	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:30.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.941  		--rc genhtml_branch_coverage=1
00:04:30.941  		--rc genhtml_function_coverage=1
00:04:30.941  		--rc genhtml_legend=1
00:04:30.941  		--rc geninfo_all_blocks=1
00:04:30.941  		--rc geninfo_unexecuted_blocks=1
00:04:30.941  		
00:04:30.941  		'
00:04:30.941    06:14:47	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:30.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.941  		--rc genhtml_branch_coverage=1
00:04:30.941  		--rc genhtml_function_coverage=1
00:04:30.941  		--rc genhtml_legend=1
00:04:30.941  		--rc geninfo_all_blocks=1
00:04:30.941  		--rc geninfo_unexecuted_blocks=1
00:04:30.941  		
00:04:30.941  		'
00:04:30.941    06:14:47	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:30.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.941  		--rc genhtml_branch_coverage=1
00:04:30.941  		--rc genhtml_function_coverage=1
00:04:30.941  		--rc genhtml_legend=1
00:04:30.941  		--rc geninfo_all_blocks=1
00:04:30.941  		--rc geninfo_unexecuted_blocks=1
00:04:30.941  		
00:04:30.941  		'
00:04:30.941   06:14:47	-- rpc/rpc.sh@65 -- # spdk_pid=55495
00:04:30.941   06:14:47	-- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:30.941   06:14:47	-- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:30.941   06:14:47	-- rpc/rpc.sh@67 -- # waitforlisten 55495
00:04:30.941   06:14:47	-- common/autotest_common.sh@829 -- # '[' -z 55495 ']'
00:04:30.941   06:14:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:30.941   06:14:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:04:30.941   06:14:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:30.941  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:30.941   06:14:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:04:30.941   06:14:47	-- common/autotest_common.sh@10 -- # set +x
00:04:31.199  [2024-12-16 06:14:47.919613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:31.199  [2024-12-16 06:14:47.919715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55495 ]
00:04:31.199  [2024-12-16 06:14:48.059544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:31.199  [2024-12-16 06:14:48.155615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:31.199  [2024-12-16 06:14:48.155781] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:31.199  [2024-12-16 06:14:48.155798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55495' to capture a snapshot of events at runtime.
00:04:31.199  [2024-12-16 06:14:48.155810] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55495 for offline analysis/debug.
00:04:31.199  [2024-12-16 06:14:48.155848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:32.134   06:14:48	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:32.134   06:14:48	-- common/autotest_common.sh@862 -- # return 0
00:04:32.134   06:14:48	-- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:32.134   06:14:48	-- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:32.134   06:14:48	-- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:32.134   06:14:48	-- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:32.134   06:14:48	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:32.134   06:14:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:32.134   06:14:48	-- common/autotest_common.sh@10 -- # set +x
00:04:32.134  ************************************
00:04:32.134  START TEST rpc_integrity
00:04:32.134  ************************************
00:04:32.135   06:14:48	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:04:32.135    06:14:48	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:32.135    06:14:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.135    06:14:48	-- common/autotest_common.sh@10 -- # set +x
00:04:32.135    06:14:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.135   06:14:48	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:32.135    06:14:48	-- rpc/rpc.sh@13 -- # jq length
00:04:32.135   06:14:49	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:32.135    06:14:49	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:32.135    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.135    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.135    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.135   06:14:49	-- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:32.135    06:14:49	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:32.135    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.135    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.135    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.135   06:14:49	-- rpc/rpc.sh@16 -- # bdevs='[
00:04:32.135  {
00:04:32.135  "aliases": [
00:04:32.135  "f3d5295c-24af-4dc4-ac0a-1d508615af8c"
00:04:32.135  ],
00:04:32.135  "assigned_rate_limits": {
00:04:32.135  "r_mbytes_per_sec": 0,
00:04:32.135  "rw_ios_per_sec": 0,
00:04:32.135  "rw_mbytes_per_sec": 0,
00:04:32.135  "w_mbytes_per_sec": 0
00:04:32.135  },
00:04:32.135  "block_size": 512,
00:04:32.135  "claimed": false,
00:04:32.135  "driver_specific": {},
00:04:32.135  "memory_domains": [
00:04:32.135  {
00:04:32.135  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:32.135  "dma_device_type": 2
00:04:32.135  }
00:04:32.135  ],
00:04:32.135  "name": "Malloc0",
00:04:32.135  "num_blocks": 16384,
00:04:32.135  "product_name": "Malloc disk",
00:04:32.135  "supported_io_types": {
00:04:32.135  "abort": true,
00:04:32.135  "compare": false,
00:04:32.135  "compare_and_write": false,
00:04:32.135  "flush": true,
00:04:32.135  "nvme_admin": false,
00:04:32.135  "nvme_io": false,
00:04:32.135  "read": true,
00:04:32.135  "reset": true,
00:04:32.135  "unmap": true,
00:04:32.135  "write": true,
00:04:32.135  "write_zeroes": true
00:04:32.135  },
00:04:32.135  "uuid": "f3d5295c-24af-4dc4-ac0a-1d508615af8c",
00:04:32.135  "zoned": false
00:04:32.135  }
00:04:32.135  ]'
00:04:32.135    06:14:49	-- rpc/rpc.sh@17 -- # jq length
00:04:32.135   06:14:49	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:32.135   06:14:49	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:32.135   06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.135   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.135  [2024-12-16 06:14:49.100546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:32.135  [2024-12-16 06:14:49.100602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:32.135  [2024-12-16 06:14:49.100622] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17b5880
00:04:32.135  [2024-12-16 06:14:49.100632] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:32.135  [2024-12-16 06:14:49.102036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:32.135  [2024-12-16 06:14:49.102066] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:32.135  Passthru0
00:04:32.135   06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.135    06:14:49	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:32.135    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.135    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.394    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.394   06:14:49	-- rpc/rpc.sh@20 -- # bdevs='[
00:04:32.394  {
00:04:32.394  "aliases": [
00:04:32.394  "f3d5295c-24af-4dc4-ac0a-1d508615af8c"
00:04:32.394  ],
00:04:32.394  "assigned_rate_limits": {
00:04:32.394  "r_mbytes_per_sec": 0,
00:04:32.394  "rw_ios_per_sec": 0,
00:04:32.394  "rw_mbytes_per_sec": 0,
00:04:32.394  "w_mbytes_per_sec": 0
00:04:32.394  },
00:04:32.394  "block_size": 512,
00:04:32.394  "claim_type": "exclusive_write",
00:04:32.394  "claimed": true,
00:04:32.394  "driver_specific": {},
00:04:32.394  "memory_domains": [
00:04:32.394  {
00:04:32.394  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:32.394  "dma_device_type": 2
00:04:32.394  }
00:04:32.394  ],
00:04:32.394  "name": "Malloc0",
00:04:32.394  "num_blocks": 16384,
00:04:32.394  "product_name": "Malloc disk",
00:04:32.394  "supported_io_types": {
00:04:32.394  "abort": true,
00:04:32.394  "compare": false,
00:04:32.394  "compare_and_write": false,
00:04:32.394  "flush": true,
00:04:32.394  "nvme_admin": false,
00:04:32.394  "nvme_io": false,
00:04:32.394  "read": true,
00:04:32.394  "reset": true,
00:04:32.394  "unmap": true,
00:04:32.394  "write": true,
00:04:32.394  "write_zeroes": true
00:04:32.394  },
00:04:32.394  "uuid": "f3d5295c-24af-4dc4-ac0a-1d508615af8c",
00:04:32.394  "zoned": false
00:04:32.394  },
00:04:32.394  {
00:04:32.394  "aliases": [
00:04:32.394  "ecbdaf89-11ac-590e-8e06-eb2634d23427"
00:04:32.394  ],
00:04:32.394  "assigned_rate_limits": {
00:04:32.394  "r_mbytes_per_sec": 0,
00:04:32.394  "rw_ios_per_sec": 0,
00:04:32.394  "rw_mbytes_per_sec": 0,
00:04:32.394  "w_mbytes_per_sec": 0
00:04:32.394  },
00:04:32.394  "block_size": 512,
00:04:32.394  "claimed": false,
00:04:32.394  "driver_specific": {
00:04:32.394  "passthru": {
00:04:32.394  "base_bdev_name": "Malloc0",
00:04:32.394  "name": "Passthru0"
00:04:32.394  }
00:04:32.394  },
00:04:32.394  "memory_domains": [
00:04:32.394  {
00:04:32.394  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:32.394  "dma_device_type": 2
00:04:32.394  }
00:04:32.394  ],
00:04:32.394  "name": "Passthru0",
00:04:32.394  "num_blocks": 16384,
00:04:32.394  "product_name": "passthru",
00:04:32.394  "supported_io_types": {
00:04:32.394  "abort": true,
00:04:32.394  "compare": false,
00:04:32.394  "compare_and_write": false,
00:04:32.394  "flush": true,
00:04:32.394  "nvme_admin": false,
00:04:32.394  "nvme_io": false,
00:04:32.394  "read": true,
00:04:32.394  "reset": true,
00:04:32.394  "unmap": true,
00:04:32.394  "write": true,
00:04:32.394  "write_zeroes": true
00:04:32.394  },
00:04:32.394  "uuid": "ecbdaf89-11ac-590e-8e06-eb2634d23427",
00:04:32.394  "zoned": false
00:04:32.394  }
00:04:32.394  ]'
00:04:32.394    06:14:49	-- rpc/rpc.sh@21 -- # jq length
00:04:32.394   06:14:49	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:32.394   06:14:49	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:32.394   06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.394   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.394   06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.394   06:14:49	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:32.394   06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.394   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.394   06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.394    06:14:49	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:32.394    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.394    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.394    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.394   06:14:49	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:32.394    06:14:49	-- rpc/rpc.sh@26 -- # jq length
00:04:32.394   06:14:49	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:32.394  
00:04:32.394  real	0m0.325s
00:04:32.394  user	0m0.215s
00:04:32.394  sys	0m0.035s
00:04:32.394   06:14:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:32.394  ************************************
00:04:32.394  END TEST rpc_integrity
00:04:32.394   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.394  ************************************
00:04:32.394   06:14:49	-- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:32.394   06:14:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:32.394   06:14:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:32.394   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.394  ************************************
00:04:32.394  START TEST rpc_plugins
00:04:32.394  ************************************
00:04:32.394   06:14:49	-- common/autotest_common.sh@1114 -- # rpc_plugins
00:04:32.394    06:14:49	-- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:32.394    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.394    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.394    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.394   06:14:49	-- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:32.394    06:14:49	-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:32.394    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.394    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.394    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.394   06:14:49	-- rpc/rpc.sh@31 -- # bdevs='[
00:04:32.394  {
00:04:32.394  "aliases": [
00:04:32.394  "95422799-adf3-4446-82a6-7bcdcfb2398b"
00:04:32.394  ],
00:04:32.394  "assigned_rate_limits": {
00:04:32.394  "r_mbytes_per_sec": 0,
00:04:32.394  "rw_ios_per_sec": 0,
00:04:32.394  "rw_mbytes_per_sec": 0,
00:04:32.394  "w_mbytes_per_sec": 0
00:04:32.394  },
00:04:32.394  "block_size": 4096,
00:04:32.394  "claimed": false,
00:04:32.394  "driver_specific": {},
00:04:32.394  "memory_domains": [
00:04:32.394  {
00:04:32.394  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:32.394  "dma_device_type": 2
00:04:32.394  }
00:04:32.394  ],
00:04:32.394  "name": "Malloc1",
00:04:32.394  "num_blocks": 256,
00:04:32.394  "product_name": "Malloc disk",
00:04:32.394  "supported_io_types": {
00:04:32.394  "abort": true,
00:04:32.394  "compare": false,
00:04:32.394  "compare_and_write": false,
00:04:32.394  "flush": true,
00:04:32.394  "nvme_admin": false,
00:04:32.394  "nvme_io": false,
00:04:32.394  "read": true,
00:04:32.394  "reset": true,
00:04:32.394  "unmap": true,
00:04:32.394  "write": true,
00:04:32.394  "write_zeroes": true
00:04:32.394  },
00:04:32.394  "uuid": "95422799-adf3-4446-82a6-7bcdcfb2398b",
00:04:32.394  "zoned": false
00:04:32.394  }
00:04:32.394  ]'
00:04:32.394    06:14:49	-- rpc/rpc.sh@32 -- # jq length
00:04:32.653   06:14:49	-- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:32.653   06:14:49	-- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:32.653   06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.653   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.653   06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.653    06:14:49	-- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:32.653    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.653    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.653    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.653   06:14:49	-- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:32.653    06:14:49	-- rpc/rpc.sh@36 -- # jq length
00:04:32.653   06:14:49	-- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:32.653  
00:04:32.653  real	0m0.161s
00:04:32.653  user	0m0.106s
00:04:32.653  sys	0m0.018s
00:04:32.653   06:14:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:32.653  ************************************
00:04:32.653  END TEST rpc_plugins
00:04:32.653  ************************************
00:04:32.653   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.653   06:14:49	-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:32.653   06:14:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:32.653   06:14:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:32.653   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.653  ************************************
00:04:32.653  START TEST rpc_trace_cmd_test
00:04:32.653  ************************************
00:04:32.653   06:14:49	-- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test
00:04:32.653   06:14:49	-- rpc/rpc.sh@40 -- # local info
00:04:32.653    06:14:49	-- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:32.653    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.653    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.653    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.653   06:14:49	-- rpc/rpc.sh@42 -- # info='{
00:04:32.653  "bdev": {
00:04:32.653  "mask": "0x8",
00:04:32.653  "tpoint_mask": "0xffffffffffffffff"
00:04:32.653  },
00:04:32.653  "bdev_nvme": {
00:04:32.653  "mask": "0x4000",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "blobfs": {
00:04:32.653  "mask": "0x80",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "dsa": {
00:04:32.653  "mask": "0x200",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "ftl": {
00:04:32.653  "mask": "0x40",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "iaa": {
00:04:32.653  "mask": "0x1000",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "iscsi_conn": {
00:04:32.653  "mask": "0x2",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "nvme_pcie": {
00:04:32.653  "mask": "0x800",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "nvme_tcp": {
00:04:32.653  "mask": "0x2000",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "nvmf_rdma": {
00:04:32.653  "mask": "0x10",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "nvmf_tcp": {
00:04:32.653  "mask": "0x20",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "scsi": {
00:04:32.653  "mask": "0x4",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "thread": {
00:04:32.653  "mask": "0x400",
00:04:32.653  "tpoint_mask": "0x0"
00:04:32.653  },
00:04:32.653  "tpoint_group_mask": "0x8",
00:04:32.653  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55495"
00:04:32.653  }'
00:04:32.653    06:14:49	-- rpc/rpc.sh@43 -- # jq length
00:04:32.653   06:14:49	-- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']'
00:04:32.653    06:14:49	-- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:32.912   06:14:49	-- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:32.912    06:14:49	-- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:32.912   06:14:49	-- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:32.912    06:14:49	-- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:32.912   06:14:49	-- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:32.912    06:14:49	-- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:32.912   06:14:49	-- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:32.912  
00:04:32.912  real	0m0.278s
00:04:32.912  user	0m0.246s
00:04:32.912  sys	0m0.024s
00:04:32.912   06:14:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:32.912  ************************************
00:04:32.912  END TEST rpc_trace_cmd_test
00:04:32.912   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.912  ************************************
00:04:32.912   06:14:49	-- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]]
00:04:32.912   06:14:49	-- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc
00:04:32.912   06:14:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:32.912   06:14:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:32.912   06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:32.912  ************************************
00:04:32.912  START TEST go_rpc
00:04:32.912  ************************************
00:04:32.912   06:14:49	-- common/autotest_common.sh@1114 -- # go_rpc
00:04:32.912    06:14:49	-- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:04:32.912   06:14:49	-- rpc/rpc.sh@51 -- # bdevs='[]'
00:04:32.912    06:14:49	-- rpc/rpc.sh@52 -- # jq length
00:04:33.171   06:14:49	-- rpc/rpc.sh@52 -- # '[' 0 == 0 ']'
00:04:33.171    06:14:49	-- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512
00:04:33.171    06:14:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.171    06:14:49	-- common/autotest_common.sh@10 -- # set +x
00:04:33.171    06:14:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.171   06:14:49	-- rpc/rpc.sh@54 -- # malloc=Malloc2
00:04:33.171    06:14:49	-- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:04:33.171   06:14:49	-- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["e9ba821c-7df3-4161-b661-1b817373896f"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"e9ba821c-7df3-4161-b661-1b817373896f","zoned":false}]'
00:04:33.171    06:14:49	-- rpc/rpc.sh@57 -- # jq length
00:04:33.171   06:14:50	-- rpc/rpc.sh@57 -- # '[' 1 == 1 ']'
00:04:33.171   06:14:50	-- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:33.171   06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.171   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.171   06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.171    06:14:50	-- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:04:33.171   06:14:50	-- rpc/rpc.sh@60 -- # bdevs='[]'
00:04:33.171    06:14:50	-- rpc/rpc.sh@61 -- # jq length
00:04:33.171   06:14:50	-- rpc/rpc.sh@61 -- # '[' 0 == 0 ']'
00:04:33.171  
00:04:33.171  real	0m0.227s
00:04:33.171  user	0m0.155s
00:04:33.171  sys	0m0.034s
00:04:33.171   06:14:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:33.171   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.171  ************************************
00:04:33.171  END TEST go_rpc
00:04:33.171  ************************************
00:04:33.171   06:14:50	-- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:33.171   06:14:50	-- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:33.171   06:14:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:33.171   06:14:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:33.171   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.171  ************************************
00:04:33.171  START TEST rpc_daemon_integrity
00:04:33.171  ************************************
00:04:33.171   06:14:50	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:04:33.171    06:14:50	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:33.171    06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.171    06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.430    06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.430   06:14:50	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:33.430    06:14:50	-- rpc/rpc.sh@13 -- # jq length
00:04:33.430   06:14:50	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:33.430    06:14:50	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:33.430    06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.430    06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.430    06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.430   06:14:50	-- rpc/rpc.sh@15 -- # malloc=Malloc3
00:04:33.430    06:14:50	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:33.430    06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.430    06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.430    06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.430   06:14:50	-- rpc/rpc.sh@16 -- # bdevs='[
00:04:33.430  {
00:04:33.430  "aliases": [
00:04:33.430  "0b785b22-fcf8-4f14-ba38-412ee8db9f5e"
00:04:33.430  ],
00:04:33.430  "assigned_rate_limits": {
00:04:33.430  "r_mbytes_per_sec": 0,
00:04:33.430  "rw_ios_per_sec": 0,
00:04:33.430  "rw_mbytes_per_sec": 0,
00:04:33.430  "w_mbytes_per_sec": 0
00:04:33.430  },
00:04:33.430  "block_size": 512,
00:04:33.430  "claimed": false,
00:04:33.430  "driver_specific": {},
00:04:33.430  "memory_domains": [
00:04:33.430  {
00:04:33.430  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:33.430  "dma_device_type": 2
00:04:33.430  }
00:04:33.430  ],
00:04:33.430  "name": "Malloc3",
00:04:33.430  "num_blocks": 16384,
00:04:33.430  "product_name": "Malloc disk",
00:04:33.430  "supported_io_types": {
00:04:33.430  "abort": true,
00:04:33.430  "compare": false,
00:04:33.430  "compare_and_write": false,
00:04:33.430  "flush": true,
00:04:33.430  "nvme_admin": false,
00:04:33.430  "nvme_io": false,
00:04:33.430  "read": true,
00:04:33.430  "reset": true,
00:04:33.430  "unmap": true,
00:04:33.430  "write": true,
00:04:33.430  "write_zeroes": true
00:04:33.430  },
00:04:33.430  "uuid": "0b785b22-fcf8-4f14-ba38-412ee8db9f5e",
00:04:33.430  "zoned": false
00:04:33.430  }
00:04:33.430  ]'
00:04:33.430    06:14:50	-- rpc/rpc.sh@17 -- # jq length
00:04:33.430   06:14:50	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:33.430   06:14:50	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0
00:04:33.430   06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.430   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.430  [2024-12-16 06:14:50.289083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:04:33.430  [2024-12-16 06:14:50.289138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:33.430  [2024-12-16 06:14:50.289153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19a6680
00:04:33.430  [2024-12-16 06:14:50.289161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:33.430  [2024-12-16 06:14:50.290431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:33.430  [2024-12-16 06:14:50.290477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:33.430  Passthru0
00:04:33.430   06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.430    06:14:50	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:33.430    06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.430    06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.430    06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.430   06:14:50	-- rpc/rpc.sh@20 -- # bdevs='[
00:04:33.430  {
00:04:33.430  "aliases": [
00:04:33.430  "0b785b22-fcf8-4f14-ba38-412ee8db9f5e"
00:04:33.430  ],
00:04:33.430  "assigned_rate_limits": {
00:04:33.430  "r_mbytes_per_sec": 0,
00:04:33.430  "rw_ios_per_sec": 0,
00:04:33.430  "rw_mbytes_per_sec": 0,
00:04:33.430  "w_mbytes_per_sec": 0
00:04:33.430  },
00:04:33.430  "block_size": 512,
00:04:33.430  "claim_type": "exclusive_write",
00:04:33.430  "claimed": true,
00:04:33.430  "driver_specific": {},
00:04:33.430  "memory_domains": [
00:04:33.430  {
00:04:33.430  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:33.430  "dma_device_type": 2
00:04:33.430  }
00:04:33.430  ],
00:04:33.430  "name": "Malloc3",
00:04:33.430  "num_blocks": 16384,
00:04:33.430  "product_name": "Malloc disk",
00:04:33.430  "supported_io_types": {
00:04:33.430  "abort": true,
00:04:33.430  "compare": false,
00:04:33.430  "compare_and_write": false,
00:04:33.430  "flush": true,
00:04:33.430  "nvme_admin": false,
00:04:33.430  "nvme_io": false,
00:04:33.430  "read": true,
00:04:33.430  "reset": true,
00:04:33.430  "unmap": true,
00:04:33.430  "write": true,
00:04:33.430  "write_zeroes": true
00:04:33.430  },
00:04:33.430  "uuid": "0b785b22-fcf8-4f14-ba38-412ee8db9f5e",
00:04:33.430  "zoned": false
00:04:33.430  },
00:04:33.430  {
00:04:33.430  "aliases": [
00:04:33.430  "46faa14f-ae44-587a-b046-46a83c978c4d"
00:04:33.430  ],
00:04:33.430  "assigned_rate_limits": {
00:04:33.430  "r_mbytes_per_sec": 0,
00:04:33.430  "rw_ios_per_sec": 0,
00:04:33.430  "rw_mbytes_per_sec": 0,
00:04:33.430  "w_mbytes_per_sec": 0
00:04:33.430  },
00:04:33.430  "block_size": 512,
00:04:33.430  "claimed": false,
00:04:33.430  "driver_specific": {
00:04:33.430  "passthru": {
00:04:33.430  "base_bdev_name": "Malloc3",
00:04:33.430  "name": "Passthru0"
00:04:33.430  }
00:04:33.430  },
00:04:33.430  "memory_domains": [
00:04:33.430  {
00:04:33.430  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:33.430  "dma_device_type": 2
00:04:33.430  }
00:04:33.430  ],
00:04:33.430  "name": "Passthru0",
00:04:33.430  "num_blocks": 16384,
00:04:33.430  "product_name": "passthru",
00:04:33.430  "supported_io_types": {
00:04:33.430  "abort": true,
00:04:33.430  "compare": false,
00:04:33.431  "compare_and_write": false,
00:04:33.431  "flush": true,
00:04:33.431  "nvme_admin": false,
00:04:33.431  "nvme_io": false,
00:04:33.431  "read": true,
00:04:33.431  "reset": true,
00:04:33.431  "unmap": true,
00:04:33.431  "write": true,
00:04:33.431  "write_zeroes": true
00:04:33.431  },
00:04:33.431  "uuid": "46faa14f-ae44-587a-b046-46a83c978c4d",
00:04:33.431  "zoned": false
00:04:33.431  }
00:04:33.431  ]'
00:04:33.431    06:14:50	-- rpc/rpc.sh@21 -- # jq length
00:04:33.431   06:14:50	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:33.431   06:14:50	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:33.431   06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.431   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.431   06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.431   06:14:50	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3
00:04:33.431   06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.431   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.431   06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.431    06:14:50	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:33.431    06:14:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.431    06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.431    06:14:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.431   06:14:50	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:33.431    06:14:50	-- rpc/rpc.sh@26 -- # jq length
00:04:33.690   06:14:50	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:33.690  
00:04:33.690  real	0m0.319s
00:04:33.690  user	0m0.208s
00:04:33.690  sys	0m0.042s
00:04:33.690   06:14:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:33.690   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.690  ************************************
00:04:33.690  END TEST rpc_daemon_integrity
00:04:33.690  ************************************
00:04:33.690   06:14:50	-- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:33.690   06:14:50	-- rpc/rpc.sh@84 -- # killprocess 55495
00:04:33.690   06:14:50	-- common/autotest_common.sh@936 -- # '[' -z 55495 ']'
00:04:33.690   06:14:50	-- common/autotest_common.sh@940 -- # kill -0 55495
00:04:33.690    06:14:50	-- common/autotest_common.sh@941 -- # uname
00:04:33.690   06:14:50	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:33.690    06:14:50	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55495
00:04:33.690   06:14:50	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:33.690   06:14:50	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:33.690  killing process with pid 55495
00:04:33.690   06:14:50	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 55495'
00:04:33.690   06:14:50	-- common/autotest_common.sh@955 -- # kill 55495
00:04:33.690   06:14:50	-- common/autotest_common.sh@960 -- # wait 55495
00:04:33.949  
00:04:33.949  real	0m3.239s
00:04:33.949  user	0m4.264s
00:04:33.949  sys	0m0.780s
00:04:33.949   06:14:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:33.949   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:33.949  ************************************
00:04:33.949  END TEST rpc
00:04:33.949  ************************************
00:04:34.208   06:14:50	-- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:34.208   06:14:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:34.208   06:14:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:34.208   06:14:50	-- common/autotest_common.sh@10 -- # set +x
00:04:34.208  ************************************
00:04:34.208  START TEST rpc_client
00:04:34.208  ************************************
00:04:34.208   06:14:50	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:34.208  * Looking for test storage...
00:04:34.208  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:04:34.208    06:14:51	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:34.208     06:14:51	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:34.208     06:14:51	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:34.208    06:14:51	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:34.208    06:14:51	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:34.208    06:14:51	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:34.208    06:14:51	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:34.208    06:14:51	-- scripts/common.sh@335 -- # IFS=.-:
00:04:34.208    06:14:51	-- scripts/common.sh@335 -- # read -ra ver1
00:04:34.208    06:14:51	-- scripts/common.sh@336 -- # IFS=.-:
00:04:34.208    06:14:51	-- scripts/common.sh@336 -- # read -ra ver2
00:04:34.208    06:14:51	-- scripts/common.sh@337 -- # local 'op=<'
00:04:34.208    06:14:51	-- scripts/common.sh@339 -- # ver1_l=2
00:04:34.208    06:14:51	-- scripts/common.sh@340 -- # ver2_l=1
00:04:34.208    06:14:51	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:34.208    06:14:51	-- scripts/common.sh@343 -- # case "$op" in
00:04:34.208    06:14:51	-- scripts/common.sh@344 -- # : 1
00:04:34.208    06:14:51	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:34.208    06:14:51	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:34.208     06:14:51	-- scripts/common.sh@364 -- # decimal 1
00:04:34.208     06:14:51	-- scripts/common.sh@352 -- # local d=1
00:04:34.208     06:14:51	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:34.208     06:14:51	-- scripts/common.sh@354 -- # echo 1
00:04:34.208    06:14:51	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:34.208     06:14:51	-- scripts/common.sh@365 -- # decimal 2
00:04:34.208     06:14:51	-- scripts/common.sh@352 -- # local d=2
00:04:34.208     06:14:51	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:34.208     06:14:51	-- scripts/common.sh@354 -- # echo 2
00:04:34.208    06:14:51	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:34.208    06:14:51	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:34.208    06:14:51	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:34.208    06:14:51	-- scripts/common.sh@367 -- # return 0
00:04:34.208    06:14:51	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:34.208    06:14:51	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:34.208  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.208  		--rc genhtml_branch_coverage=1
00:04:34.208  		--rc genhtml_function_coverage=1
00:04:34.208  		--rc genhtml_legend=1
00:04:34.208  		--rc geninfo_all_blocks=1
00:04:34.208  		--rc geninfo_unexecuted_blocks=1
00:04:34.208  		
00:04:34.208  		'
00:04:34.208    06:14:51	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:34.208  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.208  		--rc genhtml_branch_coverage=1
00:04:34.208  		--rc genhtml_function_coverage=1
00:04:34.208  		--rc genhtml_legend=1
00:04:34.208  		--rc geninfo_all_blocks=1
00:04:34.208  		--rc geninfo_unexecuted_blocks=1
00:04:34.208  		
00:04:34.208  		'
00:04:34.208    06:14:51	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:34.208  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.208  		--rc genhtml_branch_coverage=1
00:04:34.208  		--rc genhtml_function_coverage=1
00:04:34.208  		--rc genhtml_legend=1
00:04:34.208  		--rc geninfo_all_blocks=1
00:04:34.208  		--rc geninfo_unexecuted_blocks=1
00:04:34.208  		
00:04:34.208  		'
00:04:34.208    06:14:51	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:34.208  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.208  		--rc genhtml_branch_coverage=1
00:04:34.208  		--rc genhtml_function_coverage=1
00:04:34.208  		--rc genhtml_legend=1
00:04:34.208  		--rc geninfo_all_blocks=1
00:04:34.208  		--rc geninfo_unexecuted_blocks=1
00:04:34.208  		
00:04:34.208  		'
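The scripts/common.sh trace above picks the lcov flags by comparing the installed lcov version against 2 with lt/cmp_versions: each version string is split on '.', '-' and ':' and compared component by component. The following is a rough sketch of just the '<' case exercised here, assuming missing components count as 0; the upstream implementation may differ in detail.

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {                            # sketch: only the '<' operator is handled
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side is older
        done
        return 1                                              # equal: not strictly less
    }
    # Matching the trace: lt 1.15 2 returns 0, so the lcov branch/function coverage flags are used.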
00:04:34.208   06:14:51	-- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:04:34.208  OK
00:04:34.208   06:14:51	-- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:34.208  
00:04:34.208  real	0m0.199s
00:04:34.208  user	0m0.133s
00:04:34.208  sys	0m0.078s
00:04:34.208   06:14:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:34.208   06:14:51	-- common/autotest_common.sh@10 -- # set +x
00:04:34.208  ************************************
00:04:34.208  END TEST rpc_client
00:04:34.208  ************************************
00:04:34.468   06:14:51	-- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:34.468   06:14:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:34.468   06:14:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:34.468   06:14:51	-- common/autotest_common.sh@10 -- # set +x
00:04:34.468  ************************************
00:04:34.468  START TEST json_config
00:04:34.468  ************************************
00:04:34.468   06:14:51	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:34.468    06:14:51	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:34.468     06:14:51	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:34.468     06:14:51	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:34.468    06:14:51	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:34.468    06:14:51	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:34.468    06:14:51	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:34.468    06:14:51	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:34.468    06:14:51	-- scripts/common.sh@335 -- # IFS=.-:
00:04:34.468    06:14:51	-- scripts/common.sh@335 -- # read -ra ver1
00:04:34.468    06:14:51	-- scripts/common.sh@336 -- # IFS=.-:
00:04:34.468    06:14:51	-- scripts/common.sh@336 -- # read -ra ver2
00:04:34.468    06:14:51	-- scripts/common.sh@337 -- # local 'op=<'
00:04:34.468    06:14:51	-- scripts/common.sh@339 -- # ver1_l=2
00:04:34.468    06:14:51	-- scripts/common.sh@340 -- # ver2_l=1
00:04:34.468    06:14:51	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:34.468    06:14:51	-- scripts/common.sh@343 -- # case "$op" in
00:04:34.468    06:14:51	-- scripts/common.sh@344 -- # : 1
00:04:34.468    06:14:51	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:34.468    06:14:51	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:34.468     06:14:51	-- scripts/common.sh@364 -- # decimal 1
00:04:34.468     06:14:51	-- scripts/common.sh@352 -- # local d=1
00:04:34.468     06:14:51	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:34.468     06:14:51	-- scripts/common.sh@354 -- # echo 1
00:04:34.468    06:14:51	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:34.468     06:14:51	-- scripts/common.sh@365 -- # decimal 2
00:04:34.468     06:14:51	-- scripts/common.sh@352 -- # local d=2
00:04:34.468     06:14:51	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:34.468     06:14:51	-- scripts/common.sh@354 -- # echo 2
00:04:34.468    06:14:51	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:34.468    06:14:51	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:34.468    06:14:51	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:34.468    06:14:51	-- scripts/common.sh@367 -- # return 0
00:04:34.468    06:14:51	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:34.468    06:14:51	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:34.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.468  		--rc genhtml_branch_coverage=1
00:04:34.468  		--rc genhtml_function_coverage=1
00:04:34.468  		--rc genhtml_legend=1
00:04:34.468  		--rc geninfo_all_blocks=1
00:04:34.468  		--rc geninfo_unexecuted_blocks=1
00:04:34.468  		
00:04:34.468  		'
00:04:34.468    06:14:51	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:34.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.468  		--rc genhtml_branch_coverage=1
00:04:34.468  		--rc genhtml_function_coverage=1
00:04:34.468  		--rc genhtml_legend=1
00:04:34.468  		--rc geninfo_all_blocks=1
00:04:34.468  		--rc geninfo_unexecuted_blocks=1
00:04:34.468  		
00:04:34.468  		'
00:04:34.468    06:14:51	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:34.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.468  		--rc genhtml_branch_coverage=1
00:04:34.468  		--rc genhtml_function_coverage=1
00:04:34.468  		--rc genhtml_legend=1
00:04:34.468  		--rc geninfo_all_blocks=1
00:04:34.468  		--rc geninfo_unexecuted_blocks=1
00:04:34.468  		
00:04:34.468  		'
00:04:34.468    06:14:51	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:34.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.468  		--rc genhtml_branch_coverage=1
00:04:34.468  		--rc genhtml_function_coverage=1
00:04:34.468  		--rc genhtml_legend=1
00:04:34.468  		--rc geninfo_all_blocks=1
00:04:34.468  		--rc geninfo_unexecuted_blocks=1
00:04:34.468  		
00:04:34.468  		'
00:04:34.468   06:14:51	-- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:34.468     06:14:51	-- nvmf/common.sh@7 -- # uname -s
00:04:34.468    06:14:51	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:34.468    06:14:51	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:34.468    06:14:51	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:34.468    06:14:51	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:34.468    06:14:51	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:34.468    06:14:51	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:34.468    06:14:51	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:34.468    06:14:51	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:34.468    06:14:51	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:34.468     06:14:51	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:34.468    06:14:51	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:04:34.468    06:14:51	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:04:34.468    06:14:51	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:34.468    06:14:51	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:34.468    06:14:51	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:34.468    06:14:51	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:34.468     06:14:51	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:34.468     06:14:51	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:34.468     06:14:51	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:34.468      06:14:51	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:34.468      06:14:51	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:34.468      06:14:51	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:34.468      06:14:51	-- paths/export.sh@5 -- # export PATH
00:04:34.468      06:14:51	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:34.468    06:14:51	-- nvmf/common.sh@46 -- # : 0
00:04:34.468    06:14:51	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:04:34.468    06:14:51	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:04:34.468    06:14:51	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:04:34.468    06:14:51	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:34.468    06:14:51	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:34.468    06:14:51	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:04:34.468    06:14:51	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:04:34.468    06:14:51	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:04:34.468   06:14:51	-- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:04:34.468   06:14:51	-- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:04:34.468   06:14:51	-- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:04:34.468   06:14:51	-- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:04:34.468   06:14:51	-- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='')
00:04:34.468   06:14:51	-- json_config/json_config.sh@30 -- # declare -A app_pid
00:04:34.468   06:14:51	-- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:34.468   06:14:51	-- json_config/json_config.sh@31 -- # declare -A app_socket
00:04:34.468   06:14:51	-- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:34.468   06:14:51	-- json_config/json_config.sh@32 -- # declare -A app_params
00:04:34.468   06:14:51	-- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:04:34.468   06:14:51	-- json_config/json_config.sh@33 -- # declare -A configs_path
00:04:34.468   06:14:51	-- json_config/json_config.sh@43 -- # last_event_id=0
00:04:34.468   06:14:51	-- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:34.468  INFO: JSON configuration test init
00:04:34.468   06:14:51	-- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init'
00:04:34.468   06:14:51	-- json_config/json_config.sh@420 -- # json_config_test_init
00:04:34.468   06:14:51	-- json_config/json_config.sh@315 -- # timing_enter json_config_test_init
00:04:34.468   06:14:51	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:34.468   06:14:51	-- common/autotest_common.sh@10 -- # set +x
00:04:34.468   06:14:51	-- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target
00:04:34.468   06:14:51	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:34.468   06:14:51	-- common/autotest_common.sh@10 -- # set +x
00:04:34.468   06:14:51	-- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc
00:04:34.468   06:14:51	-- json_config/json_config.sh@98 -- # local app=target
00:04:34.468   06:14:51	-- json_config/json_config.sh@99 -- # shift
00:04:34.468   06:14:51	-- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:04:34.468   06:14:51	-- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:04:34.468   06:14:51	-- json_config/json_config.sh@104 -- # local app_extra_params=
00:04:34.469   06:14:51	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:04:34.469   06:14:51	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:04:34.469   06:14:51	-- json_config/json_config.sh@111 -- # app_pid[$app]=55812
00:04:34.469  Waiting for target to run...
00:04:34.469   06:14:51	-- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:04:34.469   06:14:51	-- json_config/json_config.sh@114 -- # waitforlisten 55812 /var/tmp/spdk_tgt.sock
00:04:34.469   06:14:51	-- common/autotest_common.sh@829 -- # '[' -z 55812 ']'
00:04:34.469   06:14:51	-- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:04:34.469   06:14:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:34.469   06:14:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:04:34.469  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:34.469   06:14:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:34.469   06:14:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:04:34.469   06:14:51	-- common/autotest_common.sh@10 -- # set +x
00:04:34.727  [2024-12-16 06:14:51.473177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:34.727  [2024-12-16 06:14:51.473994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55812 ]
00:04:34.986  [2024-12-16 06:14:51.891649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:35.244  [2024-12-16 06:14:51.965713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:35.244  [2024-12-16 06:14:51.966027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:35.811   06:14:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:35.811   06:14:52	-- common/autotest_common.sh@862 -- # return 0
00:04:35.811  
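json_config_test_start_app above launches spdk_tgt on a private RPC socket with --wait-for-rpc and then blocks in waitforlisten until the socket answers. Below is a hedged sketch of that start-and-poll flow using the binary, core mask and socket path shown in the trace; the rpc_get_methods probe is an assumption, the real helper may probe the socket differently.

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    tgt_pid=$!

    # Poll until the target accepts RPCs on the UNIX socket (bounded retries).
    for (( i = 0; i < 100; i++ )); do
        "$RPC_PY" -s "$SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done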
00:04:35.811   06:14:52	-- json_config/json_config.sh@115 -- # echo ''
00:04:35.811   06:14:52	-- json_config/json_config.sh@322 -- # create_accel_config
00:04:35.811   06:14:52	-- json_config/json_config.sh@146 -- # timing_enter create_accel_config
00:04:35.811   06:14:52	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:35.811   06:14:52	-- common/autotest_common.sh@10 -- # set +x
00:04:35.811   06:14:52	-- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]]
00:04:35.811   06:14:52	-- json_config/json_config.sh@154 -- # timing_exit create_accel_config
00:04:35.811   06:14:52	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:35.811   06:14:52	-- common/autotest_common.sh@10 -- # set +x
00:04:35.811   06:14:52	-- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:04:35.811   06:14:52	-- json_config/json_config.sh@327 -- # tgt_rpc load_config
00:04:35.811   06:14:52	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:04:36.070   06:14:53	-- json_config/json_config.sh@329 -- # tgt_check_notification_types
00:04:36.070   06:14:53	-- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types
00:04:36.070   06:14:53	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:36.070   06:14:53	-- common/autotest_common.sh@10 -- # set +x
00:04:36.070   06:14:53	-- json_config/json_config.sh@48 -- # local ret=0
00:04:36.070   06:14:53	-- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:04:36.070   06:14:53	-- json_config/json_config.sh@49 -- # local enabled_types
00:04:36.070    06:14:53	-- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:04:36.070    06:14:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:04:36.070    06:14:53	-- json_config/json_config.sh@51 -- # jq -r '.[]'
00:04:36.329   06:14:53	-- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister')
00:04:36.329   06:14:53	-- json_config/json_config.sh@51 -- # local get_types
00:04:36.329   06:14:53	-- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:04:36.329   06:14:53	-- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types
00:04:36.329   06:14:53	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:36.329   06:14:53	-- common/autotest_common.sh@10 -- # set +x
00:04:36.587   06:14:53	-- json_config/json_config.sh@58 -- # return 0
00:04:36.587   06:14:53	-- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]]
00:04:36.587   06:14:53	-- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]]
00:04:36.587   06:14:53	-- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]]
00:04:36.587   06:14:53	-- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]]
00:04:36.587   06:14:53	-- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config
00:04:36.587   06:14:53	-- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config
00:04:36.587   06:14:53	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:36.587   06:14:53	-- common/autotest_common.sh@10 -- # set +x
00:04:36.587   06:14:53	-- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:04:36.587   06:14:53	-- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]]
00:04:36.587   06:14:53	-- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]]
00:04:36.587   06:14:53	-- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:36.587   06:14:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:36.846  MallocForNvmf0
00:04:36.846   06:14:53	-- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:36.846   06:14:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:37.104  MallocForNvmf1
00:04:37.104   06:14:53	-- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:04:37.104   06:14:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:04:37.104  [2024-12-16 06:14:54.041637] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:37.104   06:14:54	-- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:37.104   06:14:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:37.362   06:14:54	-- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:37.362   06:14:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:37.621   06:14:54	-- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:37.621   06:14:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:37.879   06:14:54	-- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:37.879   06:14:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:38.138  [2024-12-16 06:14:54.930086] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:38.138   06:14:54	-- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config
00:04:38.138   06:14:54	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:38.138   06:14:54	-- common/autotest_common.sh@10 -- # set +x
00:04:38.138   06:14:54	-- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target
00:04:38.138   06:14:54	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:38.138   06:14:54	-- common/autotest_common.sh@10 -- # set +x
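Collected from the create_nvmf_subsystem_config trace above, this is the RPC sequence that builds the NVMe/TCP target: two malloc bdevs, a TCP transport, one subsystem, two namespaces and a listener on 127.0.0.1:4420. Every command below appears verbatim in the log; only the $rpc shorthand is added for readability.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420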
00:04:38.138   06:14:55	-- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]]
00:04:38.138   06:14:55	-- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:38.138   06:14:55	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:38.396  MallocBdevForConfigChangeCheck
00:04:38.396   06:14:55	-- json_config/json_config.sh@355 -- # timing_exit json_config_test_init
00:04:38.396   06:14:55	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:38.396   06:14:55	-- common/autotest_common.sh@10 -- # set +x
00:04:38.396   06:14:55	-- json_config/json_config.sh@422 -- # tgt_rpc save_config
00:04:38.396   06:14:55	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:38.963  INFO: shutting down applications...
00:04:38.963   06:14:55	-- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...'
00:04:38.963   06:14:55	-- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]]
00:04:38.963   06:14:55	-- json_config/json_config.sh@431 -- # json_config_clear target
00:04:38.963   06:14:55	-- json_config/json_config.sh@385 -- # [[ -n 22 ]]
00:04:38.963   06:14:55	-- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:04:39.222  Calling clear_iscsi_subsystem
00:04:39.222  Calling clear_nvmf_subsystem
00:04:39.222  Calling clear_nbd_subsystem
00:04:39.222  Calling clear_ublk_subsystem
00:04:39.222  Calling clear_vhost_blk_subsystem
00:04:39.222  Calling clear_vhost_scsi_subsystem
00:04:39.222  Calling clear_scheduler_subsystem
00:04:39.222  Calling clear_bdev_subsystem
00:04:39.222  Calling clear_accel_subsystem
00:04:39.222  Calling clear_vmd_subsystem
00:04:39.222  Calling clear_sock_subsystem
00:04:39.222  Calling clear_iobuf_subsystem
00:04:39.222   06:14:56	-- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:04:39.222   06:14:56	-- json_config/json_config.sh@396 -- # count=100
00:04:39.222   06:14:56	-- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']'
00:04:39.222   06:14:56	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:04:39.222   06:14:56	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:39.222   06:14:56	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:04:39.790   06:14:56	-- json_config/json_config.sh@398 -- # break
00:04:39.790   06:14:56	-- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']'
00:04:39.790   06:14:56	-- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target
00:04:39.790   06:14:56	-- json_config/json_config.sh@120 -- # local app=target
00:04:39.790   06:14:56	-- json_config/json_config.sh@123 -- # [[ -n 22 ]]
00:04:39.790   06:14:56	-- json_config/json_config.sh@124 -- # [[ -n 55812 ]]
00:04:39.790   06:14:56	-- json_config/json_config.sh@127 -- # kill -SIGINT 55812
00:04:39.790   06:14:56	-- json_config/json_config.sh@129 -- # (( i = 0 ))
00:04:39.790   06:14:56	-- json_config/json_config.sh@129 -- # (( i < 30 ))
00:04:39.790   06:14:56	-- json_config/json_config.sh@130 -- # kill -0 55812
00:04:39.790   06:14:56	-- json_config/json_config.sh@134 -- # sleep 0.5
00:04:40.049   06:14:56	-- json_config/json_config.sh@129 -- # (( i++ ))
00:04:40.049   06:14:56	-- json_config/json_config.sh@129 -- # (( i < 30 ))
00:04:40.049   06:14:56	-- json_config/json_config.sh@130 -- # kill -0 55812
00:04:40.049   06:14:56	-- json_config/json_config.sh@131 -- # app_pid[$app]=
00:04:40.049   06:14:56	-- json_config/json_config.sh@132 -- # break
00:04:40.049   06:14:56	-- json_config/json_config.sh@137 -- # [[ -n '' ]]
00:04:40.049  SPDK target shutdown done
00:04:40.049   06:14:56	-- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done'
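The shutdown traced above does not kill the target outright: it sends SIGINT and then polls kill -0 in half-second steps, clearing app_pid once the process is gone. A minimal sketch of that loop, assuming the 30-iteration budget shown in the trace:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            app_pid=                    # target exited cleanly
            break
        fi
        sleep 0.5
    done
    # If app_pid is still set here, the real helper escalates rather than giving up.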
00:04:40.049  INFO: relaunching applications...
00:04:40.049   06:14:56	-- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...'
00:04:40.049   06:14:56	-- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:40.049   06:14:56	-- json_config/json_config.sh@98 -- # local app=target
00:04:40.049   06:14:56	-- json_config/json_config.sh@99 -- # shift
00:04:40.049   06:14:56	-- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:04:40.049   06:14:56	-- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:04:40.049   06:14:56	-- json_config/json_config.sh@104 -- # local app_extra_params=
00:04:40.049   06:14:56	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:04:40.049   06:14:56	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:04:40.049   06:14:56	-- json_config/json_config.sh@111 -- # app_pid[$app]=56091
00:04:40.049   06:14:56	-- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:04:40.049   06:14:56	-- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:40.049  Waiting for target to run...
00:04:40.049   06:14:56	-- json_config/json_config.sh@114 -- # waitforlisten 56091 /var/tmp/spdk_tgt.sock
00:04:40.049   06:14:56	-- common/autotest_common.sh@829 -- # '[' -z 56091 ']'
00:04:40.049   06:14:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:40.049   06:14:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:04:40.049  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:40.049   06:14:57	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:40.049   06:14:57	-- common/autotest_common.sh@838 -- # xtrace_disable
00:04:40.049   06:14:57	-- common/autotest_common.sh@10 -- # set +x
00:04:40.309  [2024-12-16 06:14:57.052677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:40.309  [2024-12-16 06:14:57.052766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56091 ]
00:04:40.573  [2024-12-16 06:14:57.454837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.573  [2024-12-16 06:14:57.529629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:40.573  [2024-12-16 06:14:57.529911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.164  [2024-12-16 06:14:57.831913] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:41.164  [2024-12-16 06:14:57.864014] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:41.164   06:14:58	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:41.164   06:14:58	-- common/autotest_common.sh@862 -- # return 0
00:04:41.164  
00:04:41.164   06:14:58	-- json_config/json_config.sh@115 -- # echo ''
00:04:41.164   06:14:58	-- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]]
00:04:41.164  INFO: Checking if target configuration is the same...
00:04:41.164   06:14:58	-- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...'
00:04:41.164   06:14:58	-- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:41.164    06:14:58	-- json_config/json_config.sh@441 -- # tgt_rpc save_config
00:04:41.164    06:14:58	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:41.164  + '[' 2 -ne 2 ']'
00:04:41.164  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:04:41.164  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:04:41.164  + rootdir=/home/vagrant/spdk_repo/spdk
00:04:41.164  +++ basename /dev/fd/62
00:04:41.164  ++ mktemp /tmp/62.XXX
00:04:41.164  + tmp_file_1=/tmp/62.VtR
00:04:41.164  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:41.164  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:41.164  + tmp_file_2=/tmp/spdk_tgt_config.json.eW0
00:04:41.164  + ret=0
00:04:41.164  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:04:41.422  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:04:41.681  + diff -u /tmp/62.VtR /tmp/spdk_tgt_config.json.eW0
00:04:41.682  INFO: JSON config files are the same
00:04:41.682  + echo 'INFO: JSON config files are the same'
00:04:41.682  + rm /tmp/62.VtR /tmp/spdk_tgt_config.json.eW0
00:04:41.682  + exit 0
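The json_diff.sh run above verifies that the live configuration matches spdk_tgt_config.json: both documents are passed through config_filter.py -method sort so key order cannot cause false mismatches, then compared with diff -u, and an empty diff yields 'JSON config files are the same'. A condensed sketch of the same check, assuming config_filter.py reads JSON on stdin:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    tmp1=$(mktemp /tmp/62.XXX)
    tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)

    $rpc save_config | $filter -method sort > "$tmp1"   # live config, key-sorted
    $filter -method sort < "$cfg"           > "$tmp2"   # on-disk config, key-sorted
    diff -u "$tmp1" "$tmp2" && echo 'INFO: JSON config files are the same'
    rm -f "$tmp1" "$tmp2"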
00:04:41.682   06:14:58	-- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]]
00:04:41.682  INFO: changing configuration and checking if this can be detected...
00:04:41.682   06:14:58	-- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:04:41.682   06:14:58	-- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:41.682   06:14:58	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:41.940   06:14:58	-- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:41.940    06:14:58	-- json_config/json_config.sh@450 -- # tgt_rpc save_config
00:04:41.940    06:14:58	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:41.940  + '[' 2 -ne 2 ']'
00:04:41.940  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:04:41.940  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:04:41.940  + rootdir=/home/vagrant/spdk_repo/spdk
00:04:41.940  +++ basename /dev/fd/62
00:04:41.940  ++ mktemp /tmp/62.XXX
00:04:41.940  + tmp_file_1=/tmp/62.thC
00:04:41.940  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:41.940  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:41.940  + tmp_file_2=/tmp/spdk_tgt_config.json.3XD
00:04:41.940  + ret=0
00:04:41.940  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:04:42.199  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:04:42.199  + diff -u /tmp/62.thC /tmp/spdk_tgt_config.json.3XD
00:04:42.199  + ret=1
00:04:42.199  + echo '=== Start of file: /tmp/62.thC ==='
00:04:42.199  + cat /tmp/62.thC
00:04:42.199  + echo '=== End of file: /tmp/62.thC ==='
00:04:42.199  + echo ''
00:04:42.199  + echo '=== Start of file: /tmp/spdk_tgt_config.json.3XD ==='
00:04:42.199  + cat /tmp/spdk_tgt_config.json.3XD
00:04:42.199  + echo '=== End of file: /tmp/spdk_tgt_config.json.3XD ==='
00:04:42.199  + echo ''
00:04:42.199  + rm /tmp/62.thC /tmp/spdk_tgt_config.json.3XD
00:04:42.199  + exit 1
00:04:42.199  INFO: configuration change detected.
00:04:42.199   06:14:59	-- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.'
00:04:42.199   06:14:59	-- json_config/json_config.sh@457 -- # json_config_test_fini
00:04:42.199   06:14:59	-- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini
00:04:42.199   06:14:59	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:42.199   06:14:59	-- common/autotest_common.sh@10 -- # set +x
00:04:42.199   06:14:59	-- json_config/json_config.sh@360 -- # local ret=0
00:04:42.199   06:14:59	-- json_config/json_config.sh@362 -- # [[ -n '' ]]
00:04:42.199   06:14:59	-- json_config/json_config.sh@370 -- # [[ -n 56091 ]]
00:04:42.199   06:14:59	-- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config
00:04:42.199   06:14:59	-- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config
00:04:42.199   06:14:59	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:42.199   06:14:59	-- common/autotest_common.sh@10 -- # set +x
00:04:42.199   06:14:59	-- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]]
00:04:42.199    06:14:59	-- json_config/json_config.sh@246 -- # uname -s
00:04:42.199   06:14:59	-- json_config/json_config.sh@246 -- # [[ Linux = Linux ]]
00:04:42.200   06:14:59	-- json_config/json_config.sh@247 -- # rm -f /sample_aio
00:04:42.200   06:14:59	-- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]]
00:04:42.200   06:14:59	-- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config
00:04:42.200   06:14:59	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:42.200   06:14:59	-- common/autotest_common.sh@10 -- # set +x
00:04:42.200   06:14:59	-- json_config/json_config.sh@376 -- # killprocess 56091
00:04:42.200   06:14:59	-- common/autotest_common.sh@936 -- # '[' -z 56091 ']'
00:04:42.200   06:14:59	-- common/autotest_common.sh@940 -- # kill -0 56091
00:04:42.200    06:14:59	-- common/autotest_common.sh@941 -- # uname
00:04:42.200   06:14:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:42.200    06:14:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56091
00:04:42.458   06:14:59	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:42.458   06:14:59	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:42.458  killing process with pid 56091
00:04:42.458   06:14:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 56091'
00:04:42.458   06:14:59	-- common/autotest_common.sh@955 -- # kill 56091
00:04:42.458   06:14:59	-- common/autotest_common.sh@960 -- # wait 56091
00:04:42.458   06:14:59	-- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:42.458   06:14:59	-- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini
00:04:42.458   06:14:59	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:42.458   06:14:59	-- common/autotest_common.sh@10 -- # set +x
00:04:42.717   06:14:59	-- json_config/json_config.sh@381 -- # return 0
00:04:42.717  INFO: Success
00:04:42.717   06:14:59	-- json_config/json_config.sh@459 -- # echo 'INFO: Success'
00:04:42.717  
00:04:42.717  real	0m8.250s
00:04:42.717  user	0m11.742s
00:04:42.717  sys	0m1.760s
00:04:42.717   06:14:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:42.717   06:14:59	-- common/autotest_common.sh@10 -- # set +x
00:04:42.717  ************************************
00:04:42.717  END TEST json_config
00:04:42.717  ************************************
00:04:42.717   06:14:59	-- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:04:42.717   06:14:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:42.717   06:14:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:42.717   06:14:59	-- common/autotest_common.sh@10 -- # set +x
00:04:42.717  ************************************
00:04:42.717  START TEST json_config_extra_key
00:04:42.717  ************************************
00:04:42.717   06:14:59	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:04:42.717    06:14:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:42.717     06:14:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:42.717     06:14:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:42.717    06:14:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:42.717    06:14:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:42.717    06:14:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:42.717    06:14:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:42.717    06:14:59	-- scripts/common.sh@335 -- # IFS=.-:
00:04:42.717    06:14:59	-- scripts/common.sh@335 -- # read -ra ver1
00:04:42.717    06:14:59	-- scripts/common.sh@336 -- # IFS=.-:
00:04:42.717    06:14:59	-- scripts/common.sh@336 -- # read -ra ver2
00:04:42.717    06:14:59	-- scripts/common.sh@337 -- # local 'op=<'
00:04:42.717    06:14:59	-- scripts/common.sh@339 -- # ver1_l=2
00:04:42.717    06:14:59	-- scripts/common.sh@340 -- # ver2_l=1
00:04:42.717    06:14:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:42.717    06:14:59	-- scripts/common.sh@343 -- # case "$op" in
00:04:42.717    06:14:59	-- scripts/common.sh@344 -- # : 1
00:04:42.717    06:14:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:42.717    06:14:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:42.717     06:14:59	-- scripts/common.sh@364 -- # decimal 1
00:04:42.717     06:14:59	-- scripts/common.sh@352 -- # local d=1
00:04:42.717     06:14:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:42.717     06:14:59	-- scripts/common.sh@354 -- # echo 1
00:04:42.717    06:14:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:42.717     06:14:59	-- scripts/common.sh@365 -- # decimal 2
00:04:42.717     06:14:59	-- scripts/common.sh@352 -- # local d=2
00:04:42.717     06:14:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:42.717     06:14:59	-- scripts/common.sh@354 -- # echo 2
00:04:42.717    06:14:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:42.717    06:14:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:42.717    06:14:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:42.717    06:14:59	-- scripts/common.sh@367 -- # return 0
00:04:42.717    06:14:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:42.717    06:14:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:42.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.717  		--rc genhtml_branch_coverage=1
00:04:42.717  		--rc genhtml_function_coverage=1
00:04:42.717  		--rc genhtml_legend=1
00:04:42.717  		--rc geninfo_all_blocks=1
00:04:42.717  		--rc geninfo_unexecuted_blocks=1
00:04:42.717  		
00:04:42.717  		'
00:04:42.717    06:14:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:42.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.717  		--rc genhtml_branch_coverage=1
00:04:42.717  		--rc genhtml_function_coverage=1
00:04:42.717  		--rc genhtml_legend=1
00:04:42.717  		--rc geninfo_all_blocks=1
00:04:42.717  		--rc geninfo_unexecuted_blocks=1
00:04:42.717  		
00:04:42.717  		'
00:04:42.717    06:14:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:42.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.717  		--rc genhtml_branch_coverage=1
00:04:42.717  		--rc genhtml_function_coverage=1
00:04:42.717  		--rc genhtml_legend=1
00:04:42.717  		--rc geninfo_all_blocks=1
00:04:42.717  		--rc geninfo_unexecuted_blocks=1
00:04:42.717  		
00:04:42.717  		'
00:04:42.717    06:14:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:42.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:42.717  		--rc genhtml_branch_coverage=1
00:04:42.717  		--rc genhtml_function_coverage=1
00:04:42.717  		--rc genhtml_legend=1
00:04:42.717  		--rc geninfo_all_blocks=1
00:04:42.717  		--rc geninfo_unexecuted_blocks=1
00:04:42.717  		
00:04:42.717  		'
00:04:42.717   06:14:59	-- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:42.717     06:14:59	-- nvmf/common.sh@7 -- # uname -s
00:04:42.717    06:14:59	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:42.717    06:14:59	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:42.717    06:14:59	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:42.717    06:14:59	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:42.718    06:14:59	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:42.718    06:14:59	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:42.718    06:14:59	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:42.718    06:14:59	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:42.718    06:14:59	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:42.718     06:14:59	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:42.718    06:14:59	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:04:42.718    06:14:59	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:04:42.718    06:14:59	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:42.718    06:14:59	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:42.718    06:14:59	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:42.718    06:14:59	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:42.718     06:14:59	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:42.718     06:14:59	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:42.718     06:14:59	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:42.718      06:14:59	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.718      06:14:59	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.718      06:14:59	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.718      06:14:59	-- paths/export.sh@5 -- # export PATH
00:04:42.718      06:14:59	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.718    06:14:59	-- nvmf/common.sh@46 -- # : 0
00:04:42.718    06:14:59	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:04:42.718    06:14:59	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:04:42.718    06:14:59	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:04:42.718    06:14:59	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:42.718    06:14:59	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:42.718    06:14:59	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:04:42.718    06:14:59	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:04:42.718    06:14:59	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='')
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@18 -- # declare -A app_params
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:42.718  INFO: launching applications...
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...'
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@24 -- # local app=target
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@25 -- # shift
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]]
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]]
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56264
00:04:42.718  Waiting for target to run...
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...'
00:04:42.718   06:14:59	-- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56264 /var/tmp/spdk_tgt.sock
00:04:42.718   06:14:59	-- common/autotest_common.sh@829 -- # '[' -z 56264 ']'
00:04:42.718   06:14:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:42.718   06:14:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:04:42.718  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:42.718   06:14:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:42.718   06:14:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:04:42.718   06:14:59	-- common/autotest_common.sh@10 -- # set +x
00:04:42.977  [2024-12-16 06:14:59.744263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:42.977  [2024-12-16 06:14:59.744351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56264 ]
00:04:43.234  [2024-12-16 06:15:00.139212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:43.492  [2024-12-16 06:15:00.214430] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:43.492  [2024-12-16 06:15:00.214572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.751   06:15:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:43.751   06:15:00	-- common/autotest_common.sh@862 -- # return 0
00:04:43.751  
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@35 -- # echo ''
00:04:43.751  INFO: shutting down applications...
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...'
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@40 -- # local app=target
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]]
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@44 -- # [[ -n 56264 ]]
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56264
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@49 -- # (( i = 0 ))
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@50 -- # kill -0 56264
00:04:43.751   06:15:00	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:04:44.319   06:15:01	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:04:44.319   06:15:01	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:04:44.319   06:15:01	-- json_config/json_config_extra_key.sh@50 -- # kill -0 56264
00:04:44.319   06:15:01	-- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]=
00:04:44.319   06:15:01	-- json_config/json_config_extra_key.sh@52 -- # break
00:04:44.319   06:15:01	-- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]]
00:04:44.319  SPDK target shutdown done
00:04:44.319   06:15:01	-- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done'
00:04:44.319  Success
00:04:44.319   06:15:01	-- json_config/json_config_extra_key.sh@82 -- # echo Success
00:04:44.319  
00:04:44.319  real	0m1.708s
00:04:44.319  user	0m1.616s
00:04:44.319  sys	0m0.426s
00:04:44.319   06:15:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:44.319  ************************************
00:04:44.319  END TEST json_config_extra_key
00:04:44.319  ************************************
00:04:44.319   06:15:01	-- common/autotest_common.sh@10 -- # set +x
00:04:44.319   06:15:01	-- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:44.319   06:15:01	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:44.319   06:15:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:44.319   06:15:01	-- common/autotest_common.sh@10 -- # set +x
00:04:44.319  ************************************
00:04:44.319  START TEST alias_rpc
00:04:44.319  ************************************
00:04:44.319   06:15:01	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:44.577  * Looking for test storage...
00:04:44.577  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:04:44.577    06:15:01	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:44.577     06:15:01	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:44.577     06:15:01	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:44.577    06:15:01	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:44.577    06:15:01	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:44.577    06:15:01	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:44.577    06:15:01	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:44.577    06:15:01	-- scripts/common.sh@335 -- # IFS=.-:
00:04:44.577    06:15:01	-- scripts/common.sh@335 -- # read -ra ver1
00:04:44.577    06:15:01	-- scripts/common.sh@336 -- # IFS=.-:
00:04:44.577    06:15:01	-- scripts/common.sh@336 -- # read -ra ver2
00:04:44.577    06:15:01	-- scripts/common.sh@337 -- # local 'op=<'
00:04:44.577    06:15:01	-- scripts/common.sh@339 -- # ver1_l=2
00:04:44.577    06:15:01	-- scripts/common.sh@340 -- # ver2_l=1
00:04:44.577    06:15:01	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:44.577    06:15:01	-- scripts/common.sh@343 -- # case "$op" in
00:04:44.577    06:15:01	-- scripts/common.sh@344 -- # : 1
00:04:44.577    06:15:01	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:44.577    06:15:01	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:44.577     06:15:01	-- scripts/common.sh@364 -- # decimal 1
00:04:44.577     06:15:01	-- scripts/common.sh@352 -- # local d=1
00:04:44.577     06:15:01	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:44.577     06:15:01	-- scripts/common.sh@354 -- # echo 1
00:04:44.577    06:15:01	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:44.577     06:15:01	-- scripts/common.sh@365 -- # decimal 2
00:04:44.578     06:15:01	-- scripts/common.sh@352 -- # local d=2
00:04:44.578     06:15:01	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:44.578     06:15:01	-- scripts/common.sh@354 -- # echo 2
00:04:44.578    06:15:01	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:44.578    06:15:01	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:44.578    06:15:01	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:44.578    06:15:01	-- scripts/common.sh@367 -- # return 0
00:04:44.578    06:15:01	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:44.578    06:15:01	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:44.578  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:44.578  		--rc genhtml_branch_coverage=1
00:04:44.578  		--rc genhtml_function_coverage=1
00:04:44.578  		--rc genhtml_legend=1
00:04:44.578  		--rc geninfo_all_blocks=1
00:04:44.578  		--rc geninfo_unexecuted_blocks=1
00:04:44.578  		
00:04:44.578  		'
00:04:44.578    06:15:01	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:44.578  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:44.578  		--rc genhtml_branch_coverage=1
00:04:44.578  		--rc genhtml_function_coverage=1
00:04:44.578  		--rc genhtml_legend=1
00:04:44.578  		--rc geninfo_all_blocks=1
00:04:44.578  		--rc geninfo_unexecuted_blocks=1
00:04:44.578  		
00:04:44.578  		'
00:04:44.578    06:15:01	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:44.578  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:44.578  		--rc genhtml_branch_coverage=1
00:04:44.578  		--rc genhtml_function_coverage=1
00:04:44.578  		--rc genhtml_legend=1
00:04:44.578  		--rc geninfo_all_blocks=1
00:04:44.578  		--rc geninfo_unexecuted_blocks=1
00:04:44.578  		
00:04:44.578  		'
00:04:44.578    06:15:01	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:44.578  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:44.578  		--rc genhtml_branch_coverage=1
00:04:44.578  		--rc genhtml_function_coverage=1
00:04:44.578  		--rc genhtml_legend=1
00:04:44.578  		--rc geninfo_all_blocks=1
00:04:44.578  		--rc geninfo_unexecuted_blocks=1
00:04:44.578  		
00:04:44.578  		'
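The indented lines above are autotest_common.sh exporting its lcov options after scripts/common.sh has compared the installed lcov version against 2 (cmp_versions 1.15 '<' 2): each version string is split on '.', '-' and ':' and the components are compared numerically, left to right. A self-contained sketch of the same idea, with a made-up helper name rather than the real lt/cmp_versions functions:

    # Succeed if version $1 sorts strictly before version $2, component-wise.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
            if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'old lcov: enable the extra --rc options'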
00:04:44.578   06:15:01	-- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:44.578   06:15:01	-- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56352
00:04:44.578   06:15:01	-- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56352
00:04:44.578   06:15:01	-- common/autotest_common.sh@829 -- # '[' -z 56352 ']'
00:04:44.578   06:15:01	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:44.578   06:15:01	-- common/autotest_common.sh@834 -- # local max_retries=100
00:04:44.578  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:44.578   06:15:01	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:44.578   06:15:01	-- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:44.578   06:15:01	-- common/autotest_common.sh@838 -- # xtrace_disable
00:04:44.578   06:15:01	-- common/autotest_common.sh@10 -- # set +x
00:04:44.578  [2024-12-16 06:15:01.490723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:44.578  [2024-12-16 06:15:01.490825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56352 ]
00:04:44.837  [2024-12-16 06:15:01.627561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:44.837  [2024-12-16 06:15:01.730272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:44.837  [2024-12-16 06:15:01.730424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
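Before the alias test can issue RPCs, alias_rpc.sh @12-@14 launches build/bin/spdk_tgt in the background and waitforlisten blocks until the target is up on /var/tmp/spdk.sock; the DPDK EAL and reactor notices above are the target coming up. A simplified sketch of that startup wait (the real waitforlisten also checks that the PID is still alive and that the RPC server answers, so this socket-only check is an assumption):

    # Start the target and wait (up to ~10 s) for its RPC socket to appear.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        if [ -S "$rpc_addr" ]; then break; fi
        sleep 0.1
    done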
00:04:45.774   06:15:02	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:45.774   06:15:02	-- common/autotest_common.sh@862 -- # return 0
00:04:45.774   06:15:02	-- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:04:46.033   06:15:02	-- alias_rpc/alias_rpc.sh@19 -- # killprocess 56352
00:04:46.033   06:15:02	-- common/autotest_common.sh@936 -- # '[' -z 56352 ']'
00:04:46.033   06:15:02	-- common/autotest_common.sh@940 -- # kill -0 56352
00:04:46.033    06:15:02	-- common/autotest_common.sh@941 -- # uname
00:04:46.033   06:15:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:46.033    06:15:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56352
00:04:46.033   06:15:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:46.033   06:15:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:46.033  killing process with pid 56352
00:04:46.033   06:15:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 56352'
00:04:46.033   06:15:02	-- common/autotest_common.sh@955 -- # kill 56352
00:04:46.033   06:15:02	-- common/autotest_common.sh@960 -- # wait 56352
00:04:46.292  
00:04:46.292  real	0m1.917s
00:04:46.292  user	0m2.195s
00:04:46.292  sys	0m0.462s
00:04:46.292   06:15:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:46.292   06:15:03	-- common/autotest_common.sh@10 -- # set +x
00:04:46.292  ************************************
00:04:46.292  END TEST alias_rpc
00:04:46.292  ************************************
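The teardown traced above (autotest_common.sh @936-@960) is the harness's process cleanup: confirm the PID is still valid with kill -0, look up its command name with ps, refuse to signal anything running as a sudo wrapper, then kill it and wait for it to exit. A compact sketch of the same flow, under a hypothetical name so it is not confused with the real killprocess helper:

    # Kill a child process started by this shell, mirroring the steps in the log.
    killproc_sketch() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" = sudo ]; then return 1; fi        # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }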
00:04:46.292   06:15:03	-- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]]
00:04:46.292   06:15:03	-- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:46.292   06:15:03	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:46.292   06:15:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:46.292   06:15:03	-- common/autotest_common.sh@10 -- # set +x
00:04:46.292  ************************************
00:04:46.292  START TEST dpdk_mem_utility
00:04:46.292  ************************************
00:04:46.292   06:15:03	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:46.551  * Looking for test storage...
00:04:46.551  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:04:46.551    06:15:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:46.551     06:15:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:46.551     06:15:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:46.551    06:15:03	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:46.551    06:15:03	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:46.551    06:15:03	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:46.551    06:15:03	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:46.551    06:15:03	-- scripts/common.sh@335 -- # IFS=.-:
00:04:46.551    06:15:03	-- scripts/common.sh@335 -- # read -ra ver1
00:04:46.551    06:15:03	-- scripts/common.sh@336 -- # IFS=.-:
00:04:46.551    06:15:03	-- scripts/common.sh@336 -- # read -ra ver2
00:04:46.551    06:15:03	-- scripts/common.sh@337 -- # local 'op=<'
00:04:46.551    06:15:03	-- scripts/common.sh@339 -- # ver1_l=2
00:04:46.551    06:15:03	-- scripts/common.sh@340 -- # ver2_l=1
00:04:46.551    06:15:03	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:46.551    06:15:03	-- scripts/common.sh@343 -- # case "$op" in
00:04:46.551    06:15:03	-- scripts/common.sh@344 -- # : 1
00:04:46.551    06:15:03	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:46.551    06:15:03	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:46.551     06:15:03	-- scripts/common.sh@364 -- # decimal 1
00:04:46.551     06:15:03	-- scripts/common.sh@352 -- # local d=1
00:04:46.551     06:15:03	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:46.551     06:15:03	-- scripts/common.sh@354 -- # echo 1
00:04:46.551    06:15:03	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:46.551     06:15:03	-- scripts/common.sh@365 -- # decimal 2
00:04:46.551     06:15:03	-- scripts/common.sh@352 -- # local d=2
00:04:46.551     06:15:03	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:46.551     06:15:03	-- scripts/common.sh@354 -- # echo 2
00:04:46.551    06:15:03	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:46.551    06:15:03	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:46.551    06:15:03	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:46.551    06:15:03	-- scripts/common.sh@367 -- # return 0
00:04:46.551    06:15:03	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:46.551    06:15:03	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:46.551  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:46.551  		--rc genhtml_branch_coverage=1
00:04:46.551  		--rc genhtml_function_coverage=1
00:04:46.551  		--rc genhtml_legend=1
00:04:46.551  		--rc geninfo_all_blocks=1
00:04:46.551  		--rc geninfo_unexecuted_blocks=1
00:04:46.551  		
00:04:46.551  		'
00:04:46.551    06:15:03	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:46.551  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:46.551  		--rc genhtml_branch_coverage=1
00:04:46.551  		--rc genhtml_function_coverage=1
00:04:46.551  		--rc genhtml_legend=1
00:04:46.551  		--rc geninfo_all_blocks=1
00:04:46.551  		--rc geninfo_unexecuted_blocks=1
00:04:46.551  		
00:04:46.551  		'
00:04:46.551    06:15:03	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:46.551  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:46.551  		--rc genhtml_branch_coverage=1
00:04:46.551  		--rc genhtml_function_coverage=1
00:04:46.551  		--rc genhtml_legend=1
00:04:46.551  		--rc geninfo_all_blocks=1
00:04:46.551  		--rc geninfo_unexecuted_blocks=1
00:04:46.551  		
00:04:46.551  		'
00:04:46.551    06:15:03	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:46.551  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:46.551  		--rc genhtml_branch_coverage=1
00:04:46.551  		--rc genhtml_function_coverage=1
00:04:46.551  		--rc genhtml_legend=1
00:04:46.551  		--rc geninfo_all_blocks=1
00:04:46.551  		--rc geninfo_unexecuted_blocks=1
00:04:46.551  		
00:04:46.551  		'
00:04:46.551   06:15:03	-- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:46.551   06:15:03	-- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56451
00:04:46.551   06:15:03	-- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56451
00:04:46.551   06:15:03	-- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:46.551   06:15:03	-- common/autotest_common.sh@829 -- # '[' -z 56451 ']'
00:04:46.551   06:15:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:46.551   06:15:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:04:46.551  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:46.551   06:15:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:46.551   06:15:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:04:46.551   06:15:03	-- common/autotest_common.sh@10 -- # set +x
00:04:46.551  [2024-12-16 06:15:03.478931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:46.551  [2024-12-16 06:15:03.479029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56451 ]
00:04:46.810  [2024-12-16 06:15:03.617514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:46.810  [2024-12-16 06:15:03.696710] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:46.810  [2024-12-16 06:15:03.696908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:47.748   06:15:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:47.748   06:15:04	-- common/autotest_common.sh@862 -- # return 0
00:04:47.748   06:15:04	-- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:47.748   06:15:04	-- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:47.748   06:15:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:47.748   06:15:04	-- common/autotest_common.sh@10 -- # set +x
00:04:47.748  {
00:04:47.748  "filename": "/tmp/spdk_mem_dump.txt"
00:04:47.748  }
00:04:47.748   06:15:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:47.748   06:15:04	-- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:47.748  DPDK memory size 814.000000 MiB in 1 heap(s)
00:04:47.748  1 heaps totaling size 814.000000 MiB
00:04:47.748    size:  814.000000 MiB heap id: 0
00:04:47.748  end heaps----------
00:04:47.748  8 mempools totaling size 598.116089 MiB
00:04:47.748    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:04:47.748    size:  158.602051 MiB name: PDU_data_out_Pool
00:04:47.748    size:   84.521057 MiB name: bdev_io_56451
00:04:47.748    size:   51.011292 MiB name: evtpool_56451
00:04:47.748    size:   50.003479 MiB name: msgpool_56451
00:04:47.748    size:   21.763794 MiB name: PDU_Pool
00:04:47.748    size:   19.513306 MiB name: SCSI_TASK_Pool
00:04:47.748    size:    0.026123 MiB name: Session_Pool
00:04:47.748  end mempools-------
00:04:47.748  6 memzones totaling size 4.142822 MiB
00:04:47.748    size:    1.000366 MiB name: RG_ring_0_56451
00:04:47.748    size:    1.000366 MiB name: RG_ring_1_56451
00:04:47.748    size:    1.000366 MiB name: RG_ring_4_56451
00:04:47.748    size:    1.000366 MiB name: RG_ring_5_56451
00:04:47.748    size:    0.125366 MiB name: RG_ring_2_56451
00:04:47.748    size:    0.015991 MiB name: RG_ring_3_56451
00:04:47.748  end memzones-------
00:04:47.748   06:15:04	-- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:04:47.748  heap id: 0 total size: 814.000000 MiB number of busy elements: 213 number of free elements: 15
00:04:47.748    list of free elements. size: 12.487854 MiB
00:04:47.748      element at address: 0x200000400000 with size:    1.999512 MiB
00:04:47.748      element at address: 0x200018e00000 with size:    0.999878 MiB
00:04:47.748      element at address: 0x200019000000 with size:    0.999878 MiB
00:04:47.748      element at address: 0x200003e00000 with size:    0.996277 MiB
00:04:47.748      element at address: 0x200031c00000 with size:    0.994446 MiB
00:04:47.748      element at address: 0x200013800000 with size:    0.978699 MiB
00:04:47.748      element at address: 0x200007000000 with size:    0.959839 MiB
00:04:47.748      element at address: 0x200019200000 with size:    0.936584 MiB
00:04:47.748      element at address: 0x200000200000 with size:    0.837219 MiB
00:04:47.748      element at address: 0x20001aa00000 with size:    0.572632 MiB
00:04:47.748      element at address: 0x20000b200000 with size:    0.489990 MiB
00:04:47.748      element at address: 0x200000800000 with size:    0.487061 MiB
00:04:47.748      element at address: 0x200019400000 with size:    0.485657 MiB
00:04:47.748      element at address: 0x200027e00000 with size:    0.398499 MiB
00:04:47.748      element at address: 0x200003a00000 with size:    0.351685 MiB
00:04:47.748    list of standard malloc elements. size: 199.249573 MiB
00:04:47.748      element at address: 0x20000b3fff80 with size:  132.000122 MiB
00:04:47.748      element at address: 0x2000071fff80 with size:   64.000122 MiB
00:04:47.748      element at address: 0x200018efff80 with size:    1.000122 MiB
00:04:47.748      element at address: 0x2000190fff80 with size:    1.000122 MiB
00:04:47.748      element at address: 0x2000192fff80 with size:    1.000122 MiB
00:04:47.748      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:04:47.748      element at address: 0x2000192eff00 with size:    0.062622 MiB
00:04:47.748      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:04:47.748      element at address: 0x2000192efdc0 with size:    0.000305 MiB
00:04:47.748      element at address: 0x2000002d6540 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6600 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d66c0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6780 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6840 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6900 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d69c0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6a80 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6b40 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6c00 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6cc0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6d80 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6e40 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6f00 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d6fc0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d71c0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7280 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7340 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7400 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d74c0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7580 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7640 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7700 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d77c0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7880 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7940 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7a00 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7ac0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7b80 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:04:47.748      element at address: 0x20000087cb00 with size:    0.000183 MiB
00:04:47.748      element at address: 0x20000087cbc0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x20000087cc80 with size:    0.000183 MiB
00:04:47.748      element at address: 0x20000087cd40 with size:    0.000183 MiB
00:04:47.748      element at address: 0x20000087ce00 with size:    0.000183 MiB
00:04:47.748      element at address: 0x20000087cec0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x2000008fd180 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a080 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a140 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a200 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a2c0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a380 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a440 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a500 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a5c0 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a680 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a740 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a800 with size:    0.000183 MiB
00:04:47.748      element at address: 0x200003a5a8c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5a980 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5aa40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5ab00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5abc0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5ac80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5ad40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5ae00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5aec0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5af80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003a5b040 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003adb300 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003adb500 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003adf7c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003affa80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003affb40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200003eff0c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x2000070fdd80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20000b27d700 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20000b27d7c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20000b27d880 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20000b27d940 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20000b27da00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20000b27dac0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20000b2fdd80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x2000138fa8c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x2000192efc40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x2000192efd00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x2000194bc740 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92980 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92a40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92b00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92bc0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92c80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92d40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92e00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92ec0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa92f80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93040 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93100 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa931c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93280 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93340 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93400 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa934c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93580 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93640 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93700 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa937c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93880 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93940 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93a00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93ac0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93b80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93c40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93d00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93dc0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93e80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa93f40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94000 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa940c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94180 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94240 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94300 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa943c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94480 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94540 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94600 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa946c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94780 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94840 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94900 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa949c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94a80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94b40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94c00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94cc0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94d80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94e40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94f00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa94fc0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa95080 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa95140 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa95200 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa952c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa95380 with size:    0.000183 MiB
00:04:47.749      element at address: 0x20001aa95440 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e66040 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e66100 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6cd00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6cf00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6cfc0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d080 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d140 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d200 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d2c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d380 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d440 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d500 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d5c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d680 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d740 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d800 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d8c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6d980 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6da40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6db00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6dbc0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6dc80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6dd40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6de00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6dec0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6df80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e040 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e100 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e1c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e280 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e340 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e400 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e4c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e580 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e640 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e700 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e7c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e880 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6e940 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6ea00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6eac0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6eb80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6ec40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6ed00 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6edc0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6ee80 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6ef40 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f000 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f0c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f180 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f240 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f300 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f3c0 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f480 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f540 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f600 with size:    0.000183 MiB
00:04:47.749      element at address: 0x200027e6f6c0 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6f780 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6f840 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6f900 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6f9c0 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6fa80 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6fb40 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6fc00 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6fcc0 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6fd80 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6fe40 with size:    0.000183 MiB
00:04:47.750      element at address: 0x200027e6ff00 with size:    0.000183 MiB
00:04:47.750    list of memzone associated elements. size: 602.262573 MiB
00:04:47.750      element at address: 0x20001aa95500 with size:  211.416748 MiB
00:04:47.750        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:47.750      element at address: 0x200027e6ffc0 with size:  157.562561 MiB
00:04:47.750        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:47.750      element at address: 0x2000139fab80 with size:   84.020630 MiB
00:04:47.750        associated memzone info: size:   84.020508 MiB name: MP_bdev_io_56451_0
00:04:47.750      element at address: 0x2000009ff380 with size:   48.003052 MiB
00:04:47.750        associated memzone info: size:   48.002930 MiB name: MP_evtpool_56451_0
00:04:47.750      element at address: 0x200003fff380 with size:   48.003052 MiB
00:04:47.750        associated memzone info: size:   48.002930 MiB name: MP_msgpool_56451_0
00:04:47.750      element at address: 0x2000195be940 with size:   20.255554 MiB
00:04:47.750        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:04:47.750      element at address: 0x200031dfeb40 with size:   18.005066 MiB
00:04:47.750        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:47.750      element at address: 0x2000005ffe00 with size:    2.000488 MiB
00:04:47.750        associated memzone info: size:    2.000366 MiB name: RG_MP_evtpool_56451
00:04:47.750      element at address: 0x200003bffe00 with size:    2.000488 MiB
00:04:47.750        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_56451
00:04:47.750      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:04:47.750        associated memzone info: size:    1.007996 MiB name: MP_evtpool_56451
00:04:47.750      element at address: 0x20000b2fde40 with size:    1.008118 MiB
00:04:47.750        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:04:47.750      element at address: 0x2000194bc800 with size:    1.008118 MiB
00:04:47.750        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:47.750      element at address: 0x2000070fde40 with size:    1.008118 MiB
00:04:47.750        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:04:47.750      element at address: 0x2000008fd240 with size:    1.008118 MiB
00:04:47.750        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:47.750      element at address: 0x200003eff180 with size:    1.000488 MiB
00:04:47.750        associated memzone info: size:    1.000366 MiB name: RG_ring_0_56451
00:04:47.750      element at address: 0x200003affc00 with size:    1.000488 MiB
00:04:47.750        associated memzone info: size:    1.000366 MiB name: RG_ring_1_56451
00:04:47.750      element at address: 0x2000138fa980 with size:    1.000488 MiB
00:04:47.750        associated memzone info: size:    1.000366 MiB name: RG_ring_4_56451
00:04:47.750      element at address: 0x200031cfe940 with size:    1.000488 MiB
00:04:47.750        associated memzone info: size:    1.000366 MiB name: RG_ring_5_56451
00:04:47.750      element at address: 0x200003a5b100 with size:    0.500488 MiB
00:04:47.750        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_56451
00:04:47.750      element at address: 0x20000b27db80 with size:    0.500488 MiB
00:04:47.750        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:04:47.750      element at address: 0x20000087cf80 with size:    0.500488 MiB
00:04:47.750        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:47.750      element at address: 0x20001947c540 with size:    0.250488 MiB
00:04:47.750        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:47.750      element at address: 0x200003adf880 with size:    0.125488 MiB
00:04:47.750        associated memzone info: size:    0.125366 MiB name: RG_ring_2_56451
00:04:47.750      element at address: 0x2000070f5b80 with size:    0.031738 MiB
00:04:47.750        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:47.750      element at address: 0x200027e661c0 with size:    0.023743 MiB
00:04:47.750        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:04:47.750      element at address: 0x200003adb5c0 with size:    0.016113 MiB
00:04:47.750        associated memzone info: size:    0.015991 MiB name: RG_ring_3_56451
00:04:47.750      element at address: 0x200027e6c300 with size:    0.002441 MiB
00:04:47.750        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:04:47.750      element at address: 0x2000002d7080 with size:    0.000305 MiB
00:04:47.750        associated memzone info: size:    0.000183 MiB name: MP_msgpool_56451
00:04:47.750      element at address: 0x200003adb3c0 with size:    0.000305 MiB
00:04:47.750        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_56451
00:04:47.750      element at address: 0x200027e6cdc0 with size:    0.000305 MiB
00:04:47.750        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
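Everything from the heap summary down to the memzone list above comes from scripts/dpdk_mem_info.py: the first invocation (test_dpdk_mem_info.sh @21) prints the heap/mempool/memzone totals, and the second (@23, with -m 0) lists every free and allocated element in heap 0. Both rely on the target first writing its stats to a dump file via the env_dpdk_get_mem_stats RPC, whose reply ({"filename": "/tmp/spdk_mem_dump.txt"}) appears earlier in the log. A hedged sketch of that sequence, assuming rpc.py exposes the RPC under the same name and that the analyzer reads the default dump file:

    # Dump the target's DPDK memory stats, then summarize and detail them.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem_script=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    "$rpc" env_dpdk_get_mem_stats        # target writes /tmp/spdk_mem_dump.txt
    "$mem_script"                        # heaps, mempools, memzones summary
    "$mem_script" -m 0                   # element-level view of heap id 0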
00:04:47.750   06:15:04	-- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:47.750   06:15:04	-- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56451
00:04:47.750   06:15:04	-- common/autotest_common.sh@936 -- # '[' -z 56451 ']'
00:04:47.750   06:15:04	-- common/autotest_common.sh@940 -- # kill -0 56451
00:04:47.750    06:15:04	-- common/autotest_common.sh@941 -- # uname
00:04:47.750   06:15:04	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:47.750    06:15:04	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56451
00:04:47.750   06:15:04	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:47.750   06:15:04	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:47.750  killing process with pid 56451
00:04:47.750   06:15:04	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 56451'
00:04:47.750   06:15:04	-- common/autotest_common.sh@955 -- # kill 56451
00:04:47.750   06:15:04	-- common/autotest_common.sh@960 -- # wait 56451
00:04:48.318  
00:04:48.319  real	0m1.767s
00:04:48.319  user	0m1.932s
00:04:48.319  sys	0m0.422s
00:04:48.319   06:15:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:48.319   06:15:05	-- common/autotest_common.sh@10 -- # set +x
00:04:48.319  ************************************
00:04:48.319  END TEST dpdk_mem_utility
00:04:48.319  ************************************
00:04:48.319   06:15:05	-- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:48.319   06:15:05	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:48.319   06:15:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:48.319   06:15:05	-- common/autotest_common.sh@10 -- # set +x
00:04:48.319  ************************************
00:04:48.319  START TEST event
00:04:48.319  ************************************
00:04:48.319   06:15:05	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:48.319  * Looking for test storage...
00:04:48.319  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:48.319    06:15:05	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:48.319     06:15:05	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:48.319     06:15:05	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:48.319    06:15:05	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:48.319    06:15:05	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:48.319    06:15:05	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:48.319    06:15:05	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:48.319    06:15:05	-- scripts/common.sh@335 -- # IFS=.-:
00:04:48.319    06:15:05	-- scripts/common.sh@335 -- # read -ra ver1
00:04:48.319    06:15:05	-- scripts/common.sh@336 -- # IFS=.-:
00:04:48.319    06:15:05	-- scripts/common.sh@336 -- # read -ra ver2
00:04:48.319    06:15:05	-- scripts/common.sh@337 -- # local 'op=<'
00:04:48.319    06:15:05	-- scripts/common.sh@339 -- # ver1_l=2
00:04:48.319    06:15:05	-- scripts/common.sh@340 -- # ver2_l=1
00:04:48.319    06:15:05	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:48.319    06:15:05	-- scripts/common.sh@343 -- # case "$op" in
00:04:48.319    06:15:05	-- scripts/common.sh@344 -- # : 1
00:04:48.319    06:15:05	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:48.319    06:15:05	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:48.319     06:15:05	-- scripts/common.sh@364 -- # decimal 1
00:04:48.319     06:15:05	-- scripts/common.sh@352 -- # local d=1
00:04:48.319     06:15:05	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:48.319     06:15:05	-- scripts/common.sh@354 -- # echo 1
00:04:48.319    06:15:05	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:48.319     06:15:05	-- scripts/common.sh@365 -- # decimal 2
00:04:48.319     06:15:05	-- scripts/common.sh@352 -- # local d=2
00:04:48.319     06:15:05	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:48.319     06:15:05	-- scripts/common.sh@354 -- # echo 2
00:04:48.319    06:15:05	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:48.319    06:15:05	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:48.319    06:15:05	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:48.319    06:15:05	-- scripts/common.sh@367 -- # return 0
00:04:48.319    06:15:05	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:48.319    06:15:05	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:48.319  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.319  		--rc genhtml_branch_coverage=1
00:04:48.319  		--rc genhtml_function_coverage=1
00:04:48.319  		--rc genhtml_legend=1
00:04:48.319  		--rc geninfo_all_blocks=1
00:04:48.319  		--rc geninfo_unexecuted_blocks=1
00:04:48.319  		
00:04:48.319  		'
00:04:48.319    06:15:05	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:48.319  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.319  		--rc genhtml_branch_coverage=1
00:04:48.319  		--rc genhtml_function_coverage=1
00:04:48.319  		--rc genhtml_legend=1
00:04:48.319  		--rc geninfo_all_blocks=1
00:04:48.319  		--rc geninfo_unexecuted_blocks=1
00:04:48.319  		
00:04:48.319  		'
00:04:48.319    06:15:05	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:48.319  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.319  		--rc genhtml_branch_coverage=1
00:04:48.319  		--rc genhtml_function_coverage=1
00:04:48.319  		--rc genhtml_legend=1
00:04:48.319  		--rc geninfo_all_blocks=1
00:04:48.319  		--rc geninfo_unexecuted_blocks=1
00:04:48.319  		
00:04:48.319  		'
00:04:48.319    06:15:05	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:48.319  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.319  		--rc genhtml_branch_coverage=1
00:04:48.319  		--rc genhtml_function_coverage=1
00:04:48.319  		--rc genhtml_legend=1
00:04:48.319  		--rc geninfo_all_blocks=1
00:04:48.319  		--rc geninfo_unexecuted_blocks=1
00:04:48.319  		
00:04:48.319  		'
00:04:48.320   06:15:05	-- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:48.320    06:15:05	-- bdev/nbd_common.sh@6 -- # set -e
00:04:48.320   06:15:05	-- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:48.320   06:15:05	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:04:48.320   06:15:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:48.320   06:15:05	-- common/autotest_common.sh@10 -- # set +x
00:04:48.320  ************************************
00:04:48.320  START TEST event_perf
00:04:48.320  ************************************
00:04:48.320   06:15:05	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:48.320  Running I/O for 1 seconds...[2024-12-16 06:15:05.264152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:48.320  [2024-12-16 06:15:05.264631] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56553 ]
00:04:48.580  [2024-12-16 06:15:05.401076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:48.580  [2024-12-16 06:15:05.471070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:48.580  [2024-12-16 06:15:05.471158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:48.580  [2024-12-16 06:15:05.471307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:48.580  [2024-12-16 06:15:05.471310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.956  Running I/O for 1 seconds...
00:04:49.956  lcore  0:   209329
00:04:49.956  lcore  1:   209329
00:04:49.956  lcore  2:   209329
00:04:49.956  lcore  3:   209330
00:04:49.956  done.
00:04:49.956  
00:04:49.956  real	0m1.310s
00:04:49.956  user	0m4.128s
00:04:49.956  sys	0m0.062s
00:04:49.956   06:15:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:49.956  ************************************
00:04:49.956  END TEST event_perf
00:04:49.956  ************************************
00:04:49.956   06:15:06	-- common/autotest_common.sh@10 -- # set +x
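event_perf above was run with -m 0xF -t 1: a four-core mask (lcores 0-3) and a one-second run, which is why four reactors start and each lcore reports roughly 209k processed events. The mask-to-core expansion is standard bitmask arithmetic; a small sketch with a made-up helper name:

    # Expand a hex core mask such as 0xF into the lcore numbers it selects.
    mask_to_cores() {
        local mask=$(( $1 )) core=0 cores=()
        while (( mask )); do
            if (( mask & 1 )); then cores+=("$core"); fi
            core=$(( core + 1 ))
            mask=$(( mask >> 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0xF   # -> 0 1 2 3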
00:04:49.956   06:15:06	-- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:49.956   06:15:06	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:04:49.956   06:15:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:49.956   06:15:06	-- common/autotest_common.sh@10 -- # set +x
00:04:49.956  ************************************
00:04:49.956  START TEST event_reactor
00:04:49.956  ************************************
00:04:49.956   06:15:06	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:49.956  [2024-12-16 06:15:06.633029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:49.956  [2024-12-16 06:15:06.633126] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56586 ]
00:04:49.956  [2024-12-16 06:15:06.768387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:49.956  [2024-12-16 06:15:06.836889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.360  test_start
00:04:51.360  oneshot
00:04:51.360  tick 100
00:04:51.360  tick 100
00:04:51.360  tick 250
00:04:51.360  tick 100
00:04:51.360  tick 100
00:04:51.360  tick 100
00:04:51.360  tick 250
00:04:51.360  tick 500
00:04:51.360  tick 100
00:04:51.360  tick 100
00:04:51.360  tick 250
00:04:51.360  tick 100
00:04:51.360  tick 100
00:04:51.360  test_end
00:04:51.360  ************************************
00:04:51.360  END TEST event_reactor
00:04:51.360  ************************************
00:04:51.360  
00:04:51.360  real	0m1.304s
00:04:51.360  user	0m1.144s
00:04:51.360  sys	0m0.054s
00:04:51.360   06:15:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:51.360   06:15:07	-- common/autotest_common.sh@10 -- # set +x
00:04:51.360   06:15:07	-- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:51.360   06:15:07	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:04:51.360   06:15:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:51.360   06:15:07	-- common/autotest_common.sh@10 -- # set +x
00:04:51.360  ************************************
00:04:51.360  START TEST event_reactor_perf
00:04:51.360  ************************************
00:04:51.360   06:15:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:51.360  [2024-12-16 06:15:07.986822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:51.361  [2024-12-16 06:15:07.986923] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56616 ]
00:04:51.361  [2024-12-16 06:15:08.123128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.361  [2024-12-16 06:15:08.187272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.311  test_start
00:04:52.311  test_end
00:04:52.312  Performance:   458306 events per second
00:04:52.312  
00:04:52.312  real	0m1.307s
00:04:52.312  user	0m1.152s
00:04:52.312  sys	0m0.050s
00:04:52.312   06:15:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:52.312   06:15:09	-- common/autotest_common.sh@10 -- # set +x
00:04:52.312  ************************************
00:04:52.312  END TEST event_reactor_perf
00:04:52.312  ************************************
00:04:52.571    06:15:09	-- event/event.sh@49 -- # uname -s
00:04:52.571   06:15:09	-- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:52.571   06:15:09	-- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:52.571   06:15:09	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:52.571   06:15:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:52.571   06:15:09	-- common/autotest_common.sh@10 -- # set +x
00:04:52.571  ************************************
00:04:52.571  START TEST event_scheduler
00:04:52.571  ************************************
00:04:52.571   06:15:09	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:52.571  * Looking for test storage...
00:04:52.571  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:04:52.571    06:15:09	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:52.571     06:15:09	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:52.571     06:15:09	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:52.571    06:15:09	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:52.571    06:15:09	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:52.571    06:15:09	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:52.571    06:15:09	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:52.571    06:15:09	-- scripts/common.sh@335 -- # IFS=.-:
00:04:52.571    06:15:09	-- scripts/common.sh@335 -- # read -ra ver1
00:04:52.571    06:15:09	-- scripts/common.sh@336 -- # IFS=.-:
00:04:52.571    06:15:09	-- scripts/common.sh@336 -- # read -ra ver2
00:04:52.571    06:15:09	-- scripts/common.sh@337 -- # local 'op=<'
00:04:52.571    06:15:09	-- scripts/common.sh@339 -- # ver1_l=2
00:04:52.571    06:15:09	-- scripts/common.sh@340 -- # ver2_l=1
00:04:52.571    06:15:09	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:52.571    06:15:09	-- scripts/common.sh@343 -- # case "$op" in
00:04:52.571    06:15:09	-- scripts/common.sh@344 -- # : 1
00:04:52.571    06:15:09	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:52.571    06:15:09	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:52.571     06:15:09	-- scripts/common.sh@364 -- # decimal 1
00:04:52.571     06:15:09	-- scripts/common.sh@352 -- # local d=1
00:04:52.571     06:15:09	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:52.571     06:15:09	-- scripts/common.sh@354 -- # echo 1
00:04:52.571    06:15:09	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:52.571     06:15:09	-- scripts/common.sh@365 -- # decimal 2
00:04:52.571     06:15:09	-- scripts/common.sh@352 -- # local d=2
00:04:52.571     06:15:09	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:52.571     06:15:09	-- scripts/common.sh@354 -- # echo 2
00:04:52.571    06:15:09	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:52.572    06:15:09	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:52.572    06:15:09	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:52.572    06:15:09	-- scripts/common.sh@367 -- # return 0
00:04:52.572    06:15:09	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:52.572    06:15:09	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:52.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.572  		--rc genhtml_branch_coverage=1
00:04:52.572  		--rc genhtml_function_coverage=1
00:04:52.572  		--rc genhtml_legend=1
00:04:52.572  		--rc geninfo_all_blocks=1
00:04:52.572  		--rc geninfo_unexecuted_blocks=1
00:04:52.572  		
00:04:52.572  		'
00:04:52.572    06:15:09	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:52.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.572  		--rc genhtml_branch_coverage=1
00:04:52.572  		--rc genhtml_function_coverage=1
00:04:52.572  		--rc genhtml_legend=1
00:04:52.572  		--rc geninfo_all_blocks=1
00:04:52.572  		--rc geninfo_unexecuted_blocks=1
00:04:52.572  		
00:04:52.572  		'
00:04:52.572    06:15:09	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:52.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.572  		--rc genhtml_branch_coverage=1
00:04:52.572  		--rc genhtml_function_coverage=1
00:04:52.572  		--rc genhtml_legend=1
00:04:52.572  		--rc geninfo_all_blocks=1
00:04:52.572  		--rc geninfo_unexecuted_blocks=1
00:04:52.572  		
00:04:52.572  		'
00:04:52.572    06:15:09	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:52.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.572  		--rc genhtml_branch_coverage=1
00:04:52.572  		--rc genhtml_function_coverage=1
00:04:52.572  		--rc genhtml_legend=1
00:04:52.572  		--rc geninfo_all_blocks=1
00:04:52.572  		--rc geninfo_unexecuted_blocks=1
00:04:52.572  		
00:04:52.572  		'
00:04:52.572   06:15:09	-- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:52.572   06:15:09	-- scheduler/scheduler.sh@35 -- # scheduler_pid=56691
00:04:52.572   06:15:09	-- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:52.572   06:15:09	-- scheduler/scheduler.sh@37 -- # waitforlisten 56691
00:04:52.572   06:15:09	-- common/autotest_common.sh@829 -- # '[' -z 56691 ']'
00:04:52.572   06:15:09	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:52.572   06:15:09	-- common/autotest_common.sh@834 -- # local max_retries=100
00:04:52.572  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:52.572   06:15:09	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:52.572   06:15:09	-- common/autotest_common.sh@838 -- # xtrace_disable
00:04:52.572   06:15:09	-- common/autotest_common.sh@10 -- # set +x
00:04:52.572   06:15:09	-- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:52.832  [2024-12-16 06:15:09.560903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:52.832  [2024-12-16 06:15:09.561000] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56691 ]
00:04:52.832  [2024-12-16 06:15:09.700664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:52.832  [2024-12-16 06:15:09.803579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.832  [2024-12-16 06:15:09.803746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:52.832  [2024-12-16 06:15:09.803856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:52.832  [2024-12-16 06:15:09.803863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:53.771   06:15:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:53.771   06:15:10	-- common/autotest_common.sh@862 -- # return 0
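The scheduler test app was launched above with -m 0xF (cores 0-3), -p 0x2 (main lcore 2) and --wait-for-rpc, and waitforlisten just returned once the app was up on /var/tmp/spdk.sock. One plausible simplification of that wait loop (retry count, sleep interval and the socket-existence check are approximations, not the helper's exact logic):

  wait_for_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1      # target process died
          [[ -S $sock ]] && return 0                  # RPC socket is up
          sleep 0.1
      done
      return 1
  }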
00:04:53.771   06:15:10	-- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:53.771   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.771   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.771  POWER: Env isn't set yet!
00:04:53.771  POWER: Attempting to initialise ACPI cpufreq power management...
00:04:53.771  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:53.771  POWER: Cannot set governor of lcore 0 to userspace
00:04:53.771  POWER: Attempting to initialise PSTAT power management...
00:04:53.771  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:53.771  POWER: Cannot set governor of lcore 0 to performance
00:04:53.771  POWER: Attempting to initialise AMD PSTATE power management...
00:04:53.771  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:53.771  POWER: Cannot set governor of lcore 0 to userspace
00:04:53.771  POWER: Attempting to initialise CPPC power management...
00:04:53.771  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:53.771  POWER: Cannot set governor of lcore 0 to userspace
00:04:53.771  POWER: Attempting to initialise VM power management...
00:04:53.771  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:04:53.771  POWER: Unable to set Power Management Environment for lcore 0
00:04:53.771  [2024-12-16 06:15:10.589515] dpdk_governor.c:  88:_init_core: *ERROR*: Failed to initialize on core0
00:04:53.771  [2024-12-16 06:15:10.589557] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0
00:04:53.771  [2024-12-16 06:15:10.589567] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor
00:04:53.771  [2024-12-16 06:15:10.589579] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:53.771  [2024-12-16 06:15:10.589586] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:53.771  [2024-12-16 06:15:10.589594] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:53.771   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
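framework_set_scheduler dynamic succeeds even though every cpufreq governor fails to initialise in this VM (no writable /sys/.../cpufreq/scaling_governor), so the dynamic scheduler runs without a DPDK governor and only applies its load/core/busy thresholds. A quick, purely illustrative way to check whether a host exposes the governors at all:

  # Report which lcores expose a scaling_governor; if none do,
  # dpdk_governor initialisation will fail as seen above.
  for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
      [[ -e $gov ]] || continue
      printf '%s: %s\n' "$gov" "$(cat "$gov")"
  done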
00:04:53.771   06:15:10	-- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:53.771   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.771   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.771  [2024-12-16 06:15:10.679242] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:53.771   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:53.771   06:15:10	-- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:53.771   06:15:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:53.771   06:15:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:53.771   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.771  ************************************
00:04:53.771  START TEST scheduler_create_thread
00:04:53.771  ************************************
00:04:53.771   06:15:10	-- common/autotest_common.sh@1114 -- # scheduler_create_thread
00:04:53.771   06:15:10	-- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:53.771   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.771   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.771  2
00:04:53.771   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:53.771   06:15:10	-- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:53.771   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.771   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.771  3
00:04:53.771   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:53.771   06:15:10	-- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:53.771   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.771   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.771  4
00:04:53.771   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:53.771   06:15:10	-- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:53.771   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.772   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.772  5
00:04:53.772   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:53.772   06:15:10	-- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:53.772   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.772   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.772  6
00:04:53.772   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:53.772   06:15:10	-- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:53.772   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.772   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:53.772  7
00:04:53.772   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:53.772   06:15:10	-- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:53.772   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.772   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:54.031  8
00:04:54.031   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:54.031   06:15:10	-- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:54.031   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:54.031   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:54.031  9
00:04:54.031   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:54.031   06:15:10	-- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:54.031   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:54.031   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:54.031  10
00:04:54.031   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:54.031    06:15:10	-- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:54.031    06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:54.031    06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:54.031    06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:54.031   06:15:10	-- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:54.031   06:15:10	-- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:54.031   06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:54.031   06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:54.031   06:15:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:54.031    06:15:10	-- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:54.031    06:15:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:54.031    06:15:10	-- common/autotest_common.sh@10 -- # set +x
00:04:55.409    06:15:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.409   06:15:12	-- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:55.409   06:15:12	-- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:55.409   06:15:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.409   06:15:12	-- common/autotest_common.sh@10 -- # set +x
00:04:56.345   06:15:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:56.345  
00:04:56.345  real	0m2.613s
00:04:56.345  user	0m0.016s
00:04:56.345  sys	0m0.009s
00:04:56.345  ************************************
00:04:56.345  END TEST scheduler_create_thread
00:04:56.345  ************************************
00:04:56.345   06:15:13	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:56.345   06:15:13	-- common/autotest_common.sh@10 -- # set +x
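Condensing the scheduler_create_thread trace above: four active and four idle threads are created pinned to masks 0x1-0x8, one unpinned thread at 30% activity, then a half_active thread (id 11 in this run) is raised to 50% with scheduler_thread_set_active, and a final thread (id 12) is created and deleted. A hedged re-statement of that sequence as direct rpc.py calls (the test itself goes through the rpc_cmd wrapper, and the scheduler_plugin module must be importable by rpc.py):

  run_rpc() {   # thin stand-in for the rpc_cmd wrapper seen in the trace
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"
  }
  for mask in 0x1 0x2 0x4 0x8; do
      run_rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
      run_rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
  done
  run_rpc scheduler_thread_create -n one_third_active -a 30
  tid=$(run_rpc scheduler_thread_create -n half_active -a 0)     # returned 11 above
  run_rpc scheduler_thread_set_active "$tid" 50
  tid=$(run_rpc scheduler_thread_create -n deleted -a 100)       # returned 12 above
  run_rpc scheduler_thread_delete "$tid"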
00:04:56.606   06:15:13	-- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:56.607   06:15:13	-- scheduler/scheduler.sh@46 -- # killprocess 56691
00:04:56.607   06:15:13	-- common/autotest_common.sh@936 -- # '[' -z 56691 ']'
00:04:56.607   06:15:13	-- common/autotest_common.sh@940 -- # kill -0 56691
00:04:56.607    06:15:13	-- common/autotest_common.sh@941 -- # uname
00:04:56.607   06:15:13	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:56.607    06:15:13	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56691
00:04:56.607  killing process with pid 56691
00:04:56.607   06:15:13	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:04:56.607   06:15:13	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:04:56.607   06:15:13	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 56691'
00:04:56.607   06:15:13	-- common/autotest_common.sh@955 -- # kill 56691
00:04:56.607   06:15:13	-- common/autotest_common.sh@960 -- # wait 56691
00:04:56.864  [2024-12-16 06:15:13.782629] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
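Teardown clears the trap and calls killprocess on pid 56691, which re-checks that the pid is alive and still names the expected reactor process before sending the signal and waiting. A simplified stand-in for that pattern (the trace also special-cases sudo-owned processes, which is omitted here):

  kill_and_wait() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0           # nothing left to kill
      local name
      name=$(ps --no-headers -o comm= "$pid")          # same identity check as the trace
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                  # wait only works for children of this shell
  }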
00:04:57.123  
00:04:57.123  real	0m4.684s
00:04:57.123  user	0m8.925s
00:04:57.123  sys	0m0.378s
00:04:57.123   06:15:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:57.123  ************************************
00:04:57.123  END TEST event_scheduler
00:04:57.123  ************************************
00:04:57.123   06:15:14	-- common/autotest_common.sh@10 -- # set +x
00:04:57.123   06:15:14	-- event/event.sh@51 -- # modprobe -n nbd
00:04:57.123   06:15:14	-- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:57.123   06:15:14	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:57.123   06:15:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:57.123   06:15:14	-- common/autotest_common.sh@10 -- # set +x
00:04:57.123  ************************************
00:04:57.123  START TEST app_repeat
00:04:57.123  ************************************
00:04:57.123   06:15:14	-- common/autotest_common.sh@1114 -- # app_repeat_test
00:04:57.123   06:15:14	-- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:57.123   06:15:14	-- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:57.123   06:15:14	-- event/event.sh@13 -- # local nbd_list
00:04:57.123   06:15:14	-- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:57.123   06:15:14	-- event/event.sh@14 -- # local bdev_list
00:04:57.123   06:15:14	-- event/event.sh@15 -- # local repeat_times=4
00:04:57.123   06:15:14	-- event/event.sh@17 -- # modprobe nbd
00:04:57.123   06:15:14	-- event/event.sh@19 -- # repeat_pid=56803
00:04:57.123   06:15:14	-- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:57.123   06:15:14	-- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:57.123  Process app_repeat pid: 56803
00:04:57.123  spdk_app_start Round 0
00:04:57.123   06:15:14	-- event/event.sh@21 -- # echo 'Process app_repeat pid: 56803'
00:04:57.123   06:15:14	-- event/event.sh@23 -- # for i in {0..2}
00:04:57.123   06:15:14	-- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:57.123   06:15:14	-- event/event.sh@25 -- # waitforlisten 56803 /var/tmp/spdk-nbd.sock
00:04:57.123   06:15:14	-- common/autotest_common.sh@829 -- # '[' -z 56803 ']'
00:04:57.123   06:15:14	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:57.123   06:15:14	-- common/autotest_common.sh@834 -- # local max_retries=100
00:04:57.123  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:57.123   06:15:14	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:57.123   06:15:14	-- common/autotest_common.sh@838 -- # xtrace_disable
00:04:57.123   06:15:14	-- common/autotest_common.sh@10 -- # set +x
00:04:57.382  [2024-12-16 06:15:14.104105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:57.382  [2024-12-16 06:15:14.104344] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56803 ]
00:04:57.382  [2024-12-16 06:15:14.240576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:57.382  [2024-12-16 06:15:14.325675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:57.382  [2024-12-16 06:15:14.325682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.327   06:15:15	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:58.327   06:15:15	-- common/autotest_common.sh@862 -- # return 0
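app_repeat starts a second SPDK app on /var/tmp/spdk-nbd.sock with core mask 0x3 and -t 4, then iterates rounds 0..2: each round waits for the RPC socket, creates two malloc bdevs, exercises them through /dev/nbd0 and /dev/nbd1, and ends with spdk_kill_instance SIGTERM, after which the binary restarts itself for the next round. The outer shape of that loop, sketched (the per-round body is exactly what the following log lines show):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nbd_sock=/var/tmp/spdk-nbd.sock
  for round in 0 1 2; do
      echo "spdk_app_start Round $round"
      # wait for the app on $nbd_sock, create Malloc0/Malloc1, run the
      # nbd write/verify cycle traced below, then stop this iteration:
      "$rpc" -s "$nbd_sock" spdk_kill_instance SIGTERM
      sleep 3
  done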
00:04:58.327   06:15:15	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:58.585  Malloc0
00:04:58.585   06:15:15	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:58.843  Malloc1
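Each round registers two malloc bdevs over the nbd socket with bdev_malloc_create 64 4096 (size argument in MiB by the RPC's usual convention, 4096-byte blocks), and the RPC prints the generated names Malloc0 and Malloc1. For illustration only, the same call plus a listing of what got registered (bdev_get_bdevs does not appear in this excerpt):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create 64 4096           # prints the new bdev's name
  "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'    # list registered bdevs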
00:04:58.843   06:15:15	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@12 -- # local i
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:58.843   06:15:15	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:59.102  /dev/nbd0
00:04:59.102    06:15:15	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:59.102   06:15:15	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:59.102   06:15:15	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:04:59.102   06:15:15	-- common/autotest_common.sh@867 -- # local i
00:04:59.102   06:15:15	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:04:59.102   06:15:15	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:04:59.102   06:15:15	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:04:59.102   06:15:15	-- common/autotest_common.sh@871 -- # break
00:04:59.102   06:15:15	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:04:59.102   06:15:15	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:04:59.102   06:15:15	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:59.102  1+0 records in
00:04:59.102  1+0 records out
00:04:59.102  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314392 s, 13.0 MB/s
00:04:59.102    06:15:15	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:59.103   06:15:15	-- common/autotest_common.sh@884 -- # size=4096
00:04:59.103   06:15:15	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:59.103   06:15:15	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:04:59.103   06:15:15	-- common/autotest_common.sh@887 -- # return 0
00:04:59.103   06:15:15	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:59.103   06:15:15	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:59.103   06:15:15	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:59.362  /dev/nbd1
00:04:59.362    06:15:16	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:59.362   06:15:16	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:59.362   06:15:16	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:04:59.362   06:15:16	-- common/autotest_common.sh@867 -- # local i
00:04:59.362   06:15:16	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:04:59.362   06:15:16	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:04:59.362   06:15:16	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:04:59.362   06:15:16	-- common/autotest_common.sh@871 -- # break
00:04:59.362   06:15:16	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:04:59.362   06:15:16	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:04:59.362   06:15:16	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:59.362  1+0 records in
00:04:59.362  1+0 records out
00:04:59.362  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254055 s, 16.1 MB/s
00:04:59.362    06:15:16	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:59.362   06:15:16	-- common/autotest_common.sh@884 -- # size=4096
00:04:59.362   06:15:16	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:59.362   06:15:16	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:04:59.362   06:15:16	-- common/autotest_common.sh@887 -- # return 0
00:04:59.362   06:15:16	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:59.362   06:15:16	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
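nbd_start_disk has now mapped both bdevs to nbd nodes, and waitfornbd confirmed each one is usable: it polls /proc/partitions for the device name, then does a single 4 KiB direct-I/O read and checks that the copy produced a non-empty file. A simplified reconstruction under those assumptions:

  wait_for_nbd() {     # simplified waitfornbd: device listed and readable via direct I/O
      local name=$1 i tmp size
      tmp=$(mktemp)
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      dd if="/dev/$name" of="$tmp" bs=4096 count=1 iflag=direct || { rm -f "$tmp"; return 1; }
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      (( size != 0 ))      # mirrors the '[' 4096 '!=' 0 ']' check in the trace
  }
  wait_for_nbd nbd0 && wait_for_nbd nbd1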
00:04:59.362    06:15:16	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:59.362    06:15:16	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:59.362     06:15:16	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:59.621    06:15:16	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:59.621    {
00:04:59.621      "bdev_name": "Malloc0",
00:04:59.621      "nbd_device": "/dev/nbd0"
00:04:59.621    },
00:04:59.621    {
00:04:59.621      "bdev_name": "Malloc1",
00:04:59.621      "nbd_device": "/dev/nbd1"
00:04:59.621    }
00:04:59.621  ]'
00:04:59.621     06:15:16	-- bdev/nbd_common.sh@64 -- # echo '[
00:04:59.621    {
00:04:59.621      "bdev_name": "Malloc0",
00:04:59.621      "nbd_device": "/dev/nbd0"
00:04:59.621    },
00:04:59.621    {
00:04:59.621      "bdev_name": "Malloc1",
00:04:59.621      "nbd_device": "/dev/nbd1"
00:04:59.621    }
00:04:59.621  ]'
00:04:59.621     06:15:16	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:59.621    06:15:16	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:59.621  /dev/nbd1'
00:04:59.621     06:15:16	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:59.621  /dev/nbd1'
00:04:59.621     06:15:16	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:59.621    06:15:16	-- bdev/nbd_common.sh@65 -- # count=2
00:04:59.621    06:15:16	-- bdev/nbd_common.sh@66 -- # echo 2
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@95 -- # count=2
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
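nbd_get_count asks the target for its disk list as JSON, extracts the nbd_device fields with jq and counts how many are /dev/nbd nodes; the '[' 2 -ne 2 ']' check above is the assertion that both devices are present. In essence:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [[ $count -eq 2 ]] || echo "unexpected nbd count: $count"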
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@71 -- # local operation=write
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:59.621  256+0 records in
00:04:59.621  256+0 records out
00:04:59.621  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662029 s, 158 MB/s
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:59.621  256+0 records in
00:04:59.621  256+0 records out
00:04:59.621  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247931 s, 42.3 MB/s
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:59.621  256+0 records in
00:04:59.621  256+0 records out
00:04:59.621  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260469 s, 40.3 MB/s
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
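The data-path check just completed: 1 MiB of random data is written to a scratch file, copied onto each nbd device with direct I/O, compared back byte-for-byte with cmp -b -n 1M, and the scratch file is removed. Reduced to a sketch (the scratch path here is illustrative; the test keeps it under its repo):

  tmp=$(mktemp /tmp/nbdrandtest.XXXX)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"      # exits non-zero on the first differing byte
  done
  rm "$tmp"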
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@51 -- # local i
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:59.621   06:15:16	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:59.881    06:15:16	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:00.139   06:15:16	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:00.139   06:15:16	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:00.140   06:15:16	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:00.140   06:15:16	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:00.140   06:15:16	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:00.140   06:15:16	-- bdev/nbd_common.sh@41 -- # break
00:05:00.140   06:15:16	-- bdev/nbd_common.sh@45 -- # return 0
00:05:00.140   06:15:16	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:00.140   06:15:16	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:00.140    06:15:17	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:00.140   06:15:17	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:00.140   06:15:17	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:00.140   06:15:17	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:00.140   06:15:17	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:00.140   06:15:17	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:00.140   06:15:17	-- bdev/nbd_common.sh@41 -- # break
00:05:00.140   06:15:17	-- bdev/nbd_common.sh@45 -- # return 0
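After nbd_stop_disk, waitfornbd_exit polls /proc/partitions until the device name disappears, so the next round can reuse /dev/nbd0 and /dev/nbd1 cleanly. A minimal version of that wait:

  wait_for_nbd_exit() {
      local name=$1 i
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$name" /proc/partitions || return 0   # device is gone
          sleep 0.1
      done
      return 1
  }
  wait_for_nbd_exit nbd0 && wait_for_nbd_exit nbd1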
00:05:00.140    06:15:17	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:00.140    06:15:17	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:00.140     06:15:17	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:00.708    06:15:17	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:00.708     06:15:17	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:00.708     06:15:17	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:00.708    06:15:17	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:00.708     06:15:17	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:00.708     06:15:17	-- bdev/nbd_common.sh@65 -- # echo ''
00:05:00.708     06:15:17	-- bdev/nbd_common.sh@65 -- # true
00:05:00.708    06:15:17	-- bdev/nbd_common.sh@65 -- # count=0
00:05:00.708    06:15:17	-- bdev/nbd_common.sh@66 -- # echo 0
00:05:00.708   06:15:17	-- bdev/nbd_common.sh@104 -- # count=0
00:05:00.708   06:15:17	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:00.708   06:15:17	-- bdev/nbd_common.sh@109 -- # return 0
00:05:00.708   06:15:17	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:00.968   06:15:17	-- event/event.sh@35 -- # sleep 3
00:05:01.228  [2024-12-16 06:15:17.955253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:01.228  [2024-12-16 06:15:18.016823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:01.228  [2024-12-16 06:15:18.016834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.228  [2024-12-16 06:15:18.068238] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:01.228  [2024-12-16 06:15:18.068316] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:03.888   06:15:20	-- event/event.sh@23 -- # for i in {0..2}
00:05:03.888  spdk_app_start Round 1
00:05:03.888   06:15:20	-- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:03.888   06:15:20	-- event/event.sh@25 -- # waitforlisten 56803 /var/tmp/spdk-nbd.sock
00:05:03.888   06:15:20	-- common/autotest_common.sh@829 -- # '[' -z 56803 ']'
00:05:03.888   06:15:20	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:03.888   06:15:20	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:03.888  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:03.888   06:15:20	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:03.888   06:15:20	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:03.888   06:15:20	-- common/autotest_common.sh@10 -- # set +x
00:05:04.186   06:15:21	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:04.187   06:15:21	-- common/autotest_common.sh@862 -- # return 0
00:05:04.187   06:15:21	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:04.451  Malloc0
00:05:04.451   06:15:21	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:04.717  Malloc1
00:05:04.717   06:15:21	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:04.717   06:15:21	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:04.718   06:15:21	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:04.718   06:15:21	-- bdev/nbd_common.sh@12 -- # local i
00:05:04.718   06:15:21	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:04.718   06:15:21	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:04.718   06:15:21	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:04.976  /dev/nbd0
00:05:04.976    06:15:21	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:04.976   06:15:21	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:04.976   06:15:21	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:04.976   06:15:21	-- common/autotest_common.sh@867 -- # local i
00:05:04.976   06:15:21	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:04.976   06:15:21	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:04.976   06:15:21	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:04.976   06:15:21	-- common/autotest_common.sh@871 -- # break
00:05:04.976   06:15:21	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:04.976   06:15:21	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:04.976   06:15:21	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:04.976  1+0 records in
00:05:04.976  1+0 records out
00:05:04.976  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249335 s, 16.4 MB/s
00:05:04.976    06:15:21	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:04.976   06:15:21	-- common/autotest_common.sh@884 -- # size=4096
00:05:04.976   06:15:21	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:04.976   06:15:21	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:04.976   06:15:21	-- common/autotest_common.sh@887 -- # return 0
00:05:04.976   06:15:21	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:04.976   06:15:21	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:04.976   06:15:21	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:05.235  /dev/nbd1
00:05:05.235    06:15:22	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:05.235   06:15:22	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:05.235   06:15:22	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:05.235   06:15:22	-- common/autotest_common.sh@867 -- # local i
00:05:05.235   06:15:22	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:05.235   06:15:22	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:05.235   06:15:22	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:05.235   06:15:22	-- common/autotest_common.sh@871 -- # break
00:05:05.235   06:15:22	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:05.235   06:15:22	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:05.235   06:15:22	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:05.235  1+0 records in
00:05:05.235  1+0 records out
00:05:05.235  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298182 s, 13.7 MB/s
00:05:05.235    06:15:22	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:05.235   06:15:22	-- common/autotest_common.sh@884 -- # size=4096
00:05:05.235   06:15:22	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:05.235   06:15:22	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:05.235   06:15:22	-- common/autotest_common.sh@887 -- # return 0
00:05:05.235   06:15:22	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:05.235   06:15:22	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:05.235    06:15:22	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:05.235    06:15:22	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:05.235     06:15:22	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:05.493    06:15:22	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:05.493    {
00:05:05.493      "bdev_name": "Malloc0",
00:05:05.493      "nbd_device": "/dev/nbd0"
00:05:05.493    },
00:05:05.493    {
00:05:05.493      "bdev_name": "Malloc1",
00:05:05.493      "nbd_device": "/dev/nbd1"
00:05:05.493    }
00:05:05.493  ]'
00:05:05.493     06:15:22	-- bdev/nbd_common.sh@64 -- # echo '[
00:05:05.493    {
00:05:05.493      "bdev_name": "Malloc0",
00:05:05.493      "nbd_device": "/dev/nbd0"
00:05:05.493    },
00:05:05.493    {
00:05:05.493      "bdev_name": "Malloc1",
00:05:05.493      "nbd_device": "/dev/nbd1"
00:05:05.493    }
00:05:05.493  ]'
00:05:05.493     06:15:22	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:05.493    06:15:22	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:05.493  /dev/nbd1'
00:05:05.493     06:15:22	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:05.493  /dev/nbd1'
00:05:05.493     06:15:22	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:05.493    06:15:22	-- bdev/nbd_common.sh@65 -- # count=2
00:05:05.493    06:15:22	-- bdev/nbd_common.sh@66 -- # echo 2
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@95 -- # count=2
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@71 -- # local operation=write
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:05.493  256+0 records in
00:05:05.493  256+0 records out
00:05:05.493  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00887085 s, 118 MB/s
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:05.493  256+0 records in
00:05:05.493  256+0 records out
00:05:05.493  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233534 s, 44.9 MB/s
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:05.493  256+0 records in
00:05:05.493  256+0 records out
00:05:05.493  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228695 s, 45.9 MB/s
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:05.493   06:15:22	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:05.494   06:15:22	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:05.494   06:15:22	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:05.494   06:15:22	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:05.494   06:15:22	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:05.494   06:15:22	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@51 -- # local i
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:05.752    06:15:22	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@41 -- # break
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@45 -- # return 0
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:05.752   06:15:22	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:06.320    06:15:23	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:06.320   06:15:23	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:06.320   06:15:23	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:06.320   06:15:23	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:06.320   06:15:23	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:06.320   06:15:23	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:06.320   06:15:23	-- bdev/nbd_common.sh@41 -- # break
00:05:06.320   06:15:23	-- bdev/nbd_common.sh@45 -- # return 0
00:05:06.320    06:15:23	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:06.320    06:15:23	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:06.320     06:15:23	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:06.579    06:15:23	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:06.580     06:15:23	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:06.580     06:15:23	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:06.580    06:15:23	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:06.580     06:15:23	-- bdev/nbd_common.sh@65 -- # echo ''
00:05:06.580     06:15:23	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:06.580     06:15:23	-- bdev/nbd_common.sh@65 -- # true
00:05:06.580    06:15:23	-- bdev/nbd_common.sh@65 -- # count=0
00:05:06.580    06:15:23	-- bdev/nbd_common.sh@66 -- # echo 0
00:05:06.580   06:15:23	-- bdev/nbd_common.sh@104 -- # count=0
00:05:06.580   06:15:23	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:06.580   06:15:23	-- bdev/nbd_common.sh@109 -- # return 0
00:05:06.580   06:15:23	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:06.839   06:15:23	-- event/event.sh@35 -- # sleep 3
00:05:07.098  [2024-12-16 06:15:23.852338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:07.098  [2024-12-16 06:15:23.911131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:07.098  [2024-12-16 06:15:23.911141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.098  [2024-12-16 06:15:23.963378] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:07.098  [2024-12-16 06:15:23.963464] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:10.388  spdk_app_start Round 2
00:05:10.388  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:10.388   06:15:26	-- event/event.sh@23 -- # for i in {0..2}
00:05:10.388   06:15:26	-- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:10.388   06:15:26	-- event/event.sh@25 -- # waitforlisten 56803 /var/tmp/spdk-nbd.sock
00:05:10.388   06:15:26	-- common/autotest_common.sh@829 -- # '[' -z 56803 ']'
00:05:10.388   06:15:26	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:10.388   06:15:26	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:10.388   06:15:26	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:10.388   06:15:26	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:10.388   06:15:26	-- common/autotest_common.sh@10 -- # set +x
00:05:10.388   06:15:26	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:10.388   06:15:26	-- common/autotest_common.sh@862 -- # return 0
00:05:10.388   06:15:26	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:10.388  Malloc0
00:05:10.388   06:15:27	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:10.647  Malloc1
00:05:10.647   06:15:27	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:10.647   06:15:27	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:10.647   06:15:27	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:10.647   06:15:27	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@12 -- # local i
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:10.648   06:15:27	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:10.907  /dev/nbd0
00:05:10.907    06:15:27	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:10.907   06:15:27	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:10.907   06:15:27	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:10.907   06:15:27	-- common/autotest_common.sh@867 -- # local i
00:05:10.907   06:15:27	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:10.907   06:15:27	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:10.907   06:15:27	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:10.907   06:15:27	-- common/autotest_common.sh@871 -- # break
00:05:10.907   06:15:27	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:10.907   06:15:27	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:10.907   06:15:27	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:10.907  1+0 records in
00:05:10.907  1+0 records out
00:05:10.907  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180131 s, 22.7 MB/s
00:05:10.907    06:15:27	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:10.907   06:15:27	-- common/autotest_common.sh@884 -- # size=4096
00:05:10.907   06:15:27	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:10.907   06:15:27	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:10.907   06:15:27	-- common/autotest_common.sh@887 -- # return 0
00:05:10.907   06:15:27	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:10.907   06:15:27	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:10.907   06:15:27	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:11.167  /dev/nbd1
00:05:11.167    06:15:27	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:11.167   06:15:27	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:11.167   06:15:27	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:11.167   06:15:27	-- common/autotest_common.sh@867 -- # local i
00:05:11.167   06:15:27	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:11.167   06:15:27	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:11.167   06:15:27	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:11.167   06:15:27	-- common/autotest_common.sh@871 -- # break
00:05:11.167   06:15:27	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:11.167   06:15:27	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:11.167   06:15:27	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:11.167  1+0 records in
00:05:11.167  1+0 records out
00:05:11.167  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285159 s, 14.4 MB/s
00:05:11.167    06:15:28	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:11.167   06:15:28	-- common/autotest_common.sh@884 -- # size=4096
00:05:11.167   06:15:28	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:11.167   06:15:28	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:11.167   06:15:28	-- common/autotest_common.sh@887 -- # return 0
00:05:11.167   06:15:28	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:11.167   06:15:28	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:11.167    06:15:28	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:11.167    06:15:28	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:11.167     06:15:28	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:11.426    06:15:28	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:11.426    {
00:05:11.426      "bdev_name": "Malloc0",
00:05:11.426      "nbd_device": "/dev/nbd0"
00:05:11.426    },
00:05:11.426    {
00:05:11.426      "bdev_name": "Malloc1",
00:05:11.426      "nbd_device": "/dev/nbd1"
00:05:11.426    }
00:05:11.426  ]'
00:05:11.426     06:15:28	-- bdev/nbd_common.sh@64 -- # echo '[
00:05:11.426    {
00:05:11.426      "bdev_name": "Malloc0",
00:05:11.426      "nbd_device": "/dev/nbd0"
00:05:11.426    },
00:05:11.426    {
00:05:11.426      "bdev_name": "Malloc1",
00:05:11.426      "nbd_device": "/dev/nbd1"
00:05:11.426    }
00:05:11.426  ]'
00:05:11.426     06:15:28	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:11.426    06:15:28	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:11.426  /dev/nbd1'
00:05:11.426     06:15:28	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:11.426  /dev/nbd1'
00:05:11.426     06:15:28	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:11.426    06:15:28	-- bdev/nbd_common.sh@65 -- # count=2
00:05:11.426    06:15:28	-- bdev/nbd_common.sh@66 -- # echo 2
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@95 -- # count=2
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@71 -- # local operation=write
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:11.426   06:15:28	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:11.426  256+0 records in
00:05:11.426  256+0 records out
00:05:11.427  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657821 s, 159 MB/s
00:05:11.427   06:15:28	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:11.427   06:15:28	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:11.686  256+0 records in
00:05:11.686  256+0 records out
00:05:11.686  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251024 s, 41.8 MB/s
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:11.686  256+0 records in
00:05:11.686  256+0 records out
00:05:11.686  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257384 s, 40.7 MB/s
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@51 -- # local i
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:11.686   06:15:28	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:11.945    06:15:28	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@41 -- # break
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@45 -- # return 0
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:11.945   06:15:28	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:12.204    06:15:28	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:12.204   06:15:28	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:12.204   06:15:28	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:12.204   06:15:28	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:12.204   06:15:29	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:12.204   06:15:29	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:12.204   06:15:29	-- bdev/nbd_common.sh@41 -- # break
00:05:12.204   06:15:29	-- bdev/nbd_common.sh@45 -- # return 0
00:05:12.204    06:15:29	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:12.204    06:15:29	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:12.204     06:15:29	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:12.463    06:15:29	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:12.463     06:15:29	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:12.463     06:15:29	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:12.463    06:15:29	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:12.463     06:15:29	-- bdev/nbd_common.sh@65 -- # echo ''
00:05:12.463     06:15:29	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:12.463     06:15:29	-- bdev/nbd_common.sh@65 -- # true
00:05:12.463    06:15:29	-- bdev/nbd_common.sh@65 -- # count=0
00:05:12.463    06:15:29	-- bdev/nbd_common.sh@66 -- # echo 0
00:05:12.463   06:15:29	-- bdev/nbd_common.sh@104 -- # count=0
00:05:12.463   06:15:29	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:12.463   06:15:29	-- bdev/nbd_common.sh@109 -- # return 0
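
The zero-count check above can be read as the pipeline below; the `|| true` mirrors the `# true` line in the trace, because grep -c exits non-zero when the (now empty) device list contains no /dev/nbd entries.

    # Sketch of nbd_get_count as traced: list remaining exports over RPC, extract
    # their device nodes with jq, and count them (0 after a clean stop).
    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }
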
00:05:12.463   06:15:29	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:12.722   06:15:29	-- event/event.sh@35 -- # sleep 3
00:05:12.982  [2024-12-16 06:15:29.788275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:12.982  [2024-12-16 06:15:29.858128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:12.982  [2024-12-16 06:15:29.858138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.982  [2024-12-16 06:15:29.910784] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:12.982  [2024-12-16 06:15:29.910854] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:16.271  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:16.271   06:15:32	-- event/event.sh@38 -- # waitforlisten 56803 /var/tmp/spdk-nbd.sock
00:05:16.271   06:15:32	-- common/autotest_common.sh@829 -- # '[' -z 56803 ']'
00:05:16.271   06:15:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:16.271   06:15:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:16.271   06:15:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:16.271   06:15:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:16.271   06:15:32	-- common/autotest_common.sh@10 -- # set +x
00:05:16.271   06:15:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:16.271   06:15:32	-- common/autotest_common.sh@862 -- # return 0
00:05:16.271   06:15:32	-- event/event.sh@39 -- # killprocess 56803
00:05:16.271   06:15:32	-- common/autotest_common.sh@936 -- # '[' -z 56803 ']'
00:05:16.271   06:15:32	-- common/autotest_common.sh@940 -- # kill -0 56803
00:05:16.271    06:15:32	-- common/autotest_common.sh@941 -- # uname
00:05:16.271   06:15:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:16.271    06:15:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56803
00:05:16.271  killing process with pid 56803
00:05:16.271   06:15:32	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:16.271   06:15:32	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:16.271   06:15:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 56803'
00:05:16.271   06:15:32	-- common/autotest_common.sh@955 -- # kill 56803
00:05:16.271   06:15:32	-- common/autotest_common.sh@960 -- # wait 56803
00:05:16.271  spdk_app_start is called in Round 0.
00:05:16.271  Shutdown signal received, stop current app iteration
00:05:16.272  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:05:16.272  spdk_app_start is called in Round 1.
00:05:16.272  Shutdown signal received, stop current app iteration
00:05:16.272  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:05:16.272  spdk_app_start is called in Round 2.
00:05:16.272  Shutdown signal received, stop current app iteration
00:05:16.272  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:05:16.272  spdk_app_start is called in Round 3.
00:05:16.272  Shutdown signal received, stop current app iteration
00:05:16.272   06:15:33	-- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:16.272   06:15:33	-- event/event.sh@42 -- # return 0
00:05:16.272  
00:05:16.272  real	0m19.016s
00:05:16.272  user	0m42.794s
00:05:16.272  sys	0m2.886s
00:05:16.272   06:15:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:16.272  ************************************
00:05:16.272  END TEST app_repeat
00:05:16.272  ************************************
00:05:16.272   06:15:33	-- common/autotest_common.sh@10 -- # set +x
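
The four "Round N" / "Shutdown signal received" pairs logged above come from the app_repeat example reinitializing after each SIGTERM. Roughly, the driving loop looks like the sketch below; this is an illustration of the sequence, not the event.sh source.

    # Each round: kill the example app over its RPC socket, then give it time
    # to tear down and call spdk_app_start again (event.sh@34-35 in the trace).
    for round in 0 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done
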
00:05:16.272   06:15:33	-- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:16.272   06:15:33	-- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:16.272   06:15:33	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:16.272   06:15:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:16.272   06:15:33	-- common/autotest_common.sh@10 -- # set +x
00:05:16.272  ************************************
00:05:16.272  START TEST cpu_locks
00:05:16.272  ************************************
00:05:16.272   06:15:33	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:16.272  * Looking for test storage...
00:05:16.272  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:16.272    06:15:33	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:16.272     06:15:33	-- common/autotest_common.sh@1690 -- # lcov --version
00:05:16.272     06:15:33	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:05:16.531    06:15:33	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:05:16.531    06:15:33	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:05:16.531    06:15:33	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:05:16.531    06:15:33	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:05:16.531    06:15:33	-- scripts/common.sh@335 -- # IFS=.-:
00:05:16.531    06:15:33	-- scripts/common.sh@335 -- # read -ra ver1
00:05:16.531    06:15:33	-- scripts/common.sh@336 -- # IFS=.-:
00:05:16.531    06:15:33	-- scripts/common.sh@336 -- # read -ra ver2
00:05:16.531    06:15:33	-- scripts/common.sh@337 -- # local 'op=<'
00:05:16.531    06:15:33	-- scripts/common.sh@339 -- # ver1_l=2
00:05:16.531    06:15:33	-- scripts/common.sh@340 -- # ver2_l=1
00:05:16.531    06:15:33	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:05:16.531    06:15:33	-- scripts/common.sh@343 -- # case "$op" in
00:05:16.531    06:15:33	-- scripts/common.sh@344 -- # : 1
00:05:16.531    06:15:33	-- scripts/common.sh@363 -- # (( v = 0 ))
00:05:16.531    06:15:33	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:16.531     06:15:33	-- scripts/common.sh@364 -- # decimal 1
00:05:16.531     06:15:33	-- scripts/common.sh@352 -- # local d=1
00:05:16.531     06:15:33	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:16.532     06:15:33	-- scripts/common.sh@354 -- # echo 1
00:05:16.532    06:15:33	-- scripts/common.sh@364 -- # ver1[v]=1
00:05:16.532     06:15:33	-- scripts/common.sh@365 -- # decimal 2
00:05:16.532     06:15:33	-- scripts/common.sh@352 -- # local d=2
00:05:16.532     06:15:33	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:16.532     06:15:33	-- scripts/common.sh@354 -- # echo 2
00:05:16.532    06:15:33	-- scripts/common.sh@365 -- # ver2[v]=2
00:05:16.532    06:15:33	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:05:16.532    06:15:33	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:05:16.532    06:15:33	-- scripts/common.sh@367 -- # return 0
00:05:16.532    06:15:33	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:16.532    06:15:33	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:05:16.532  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.532  		--rc genhtml_branch_coverage=1
00:05:16.532  		--rc genhtml_function_coverage=1
00:05:16.532  		--rc genhtml_legend=1
00:05:16.532  		--rc geninfo_all_blocks=1
00:05:16.532  		--rc geninfo_unexecuted_blocks=1
00:05:16.532  		
00:05:16.532  		'
00:05:16.532    06:15:33	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:05:16.532  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.532  		--rc genhtml_branch_coverage=1
00:05:16.532  		--rc genhtml_function_coverage=1
00:05:16.532  		--rc genhtml_legend=1
00:05:16.532  		--rc geninfo_all_blocks=1
00:05:16.532  		--rc geninfo_unexecuted_blocks=1
00:05:16.532  		
00:05:16.532  		'
00:05:16.532    06:15:33	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:05:16.532  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.532  		--rc genhtml_branch_coverage=1
00:05:16.532  		--rc genhtml_function_coverage=1
00:05:16.532  		--rc genhtml_legend=1
00:05:16.532  		--rc geninfo_all_blocks=1
00:05:16.532  		--rc geninfo_unexecuted_blocks=1
00:05:16.532  		
00:05:16.532  		'
00:05:16.532    06:15:33	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:05:16.532  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.532  		--rc genhtml_branch_coverage=1
00:05:16.532  		--rc genhtml_function_coverage=1
00:05:16.532  		--rc genhtml_legend=1
00:05:16.532  		--rc geninfo_all_blocks=1
00:05:16.532  		--rc geninfo_unexecuted_blocks=1
00:05:16.532  		
00:05:16.532  		'
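
The lcov probe above is a field-by-field version comparison: "1.15 < 2" holds, so the extra --rc lcov_*_coverage options are exported. A hedged reconstruction of the cmp_versions helper from scripts/common.sh follows; only the '<' case used here is sketched, and the real helper also normalizes fields via its decimal function.

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 ver1_l ver2_l v lt=0
        IFS=.-: read -ra ver1 <<< "$1"    # split "1.15" into (1 15)
        IFS=.-: read -ra ver2 <<< "$3"    # split "2" into (2)
        ver1_l=${#ver1[@]}; ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && break
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
        done
        [[ $op == '<' ]] && (( lt == 1 ))
    }
    # Example matching the trace: cmp_versions 1.15 '<' 2 returns 0.
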
00:05:16.532   06:15:33	-- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:16.532   06:15:33	-- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:16.532   06:15:33	-- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:16.532   06:15:33	-- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:16.532   06:15:33	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:16.532   06:15:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:16.532   06:15:33	-- common/autotest_common.sh@10 -- # set +x
00:05:16.532  ************************************
00:05:16.532  START TEST default_locks
00:05:16.532  ************************************
00:05:16.532   06:15:33	-- common/autotest_common.sh@1114 -- # default_locks
00:05:16.532   06:15:33	-- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57441
00:05:16.532   06:15:33	-- event/cpu_locks.sh@47 -- # waitforlisten 57441
00:05:16.532   06:15:33	-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:16.532   06:15:33	-- common/autotest_common.sh@829 -- # '[' -z 57441 ']'
00:05:16.532   06:15:33	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.532  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.532   06:15:33	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:16.532   06:15:33	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:16.532   06:15:33	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:16.532   06:15:33	-- common/autotest_common.sh@10 -- # set +x
00:05:16.532  [2024-12-16 06:15:33.398942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:16.532  [2024-12-16 06:15:33.399062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57441 ]
00:05:16.816  [2024-12-16 06:15:33.534850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.816  [2024-12-16 06:15:33.612397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:16.816  [2024-12-16 06:15:33.612603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.785   06:15:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:17.785   06:15:34	-- common/autotest_common.sh@862 -- # return 0
00:05:17.785   06:15:34	-- event/cpu_locks.sh@49 -- # locks_exist 57441
00:05:17.785   06:15:34	-- event/cpu_locks.sh@22 -- # lslocks -p 57441
00:05:17.785   06:15:34	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:18.044   06:15:34	-- event/cpu_locks.sh@50 -- # killprocess 57441
00:05:18.044   06:15:34	-- common/autotest_common.sh@936 -- # '[' -z 57441 ']'
00:05:18.044   06:15:34	-- common/autotest_common.sh@940 -- # kill -0 57441
00:05:18.044    06:15:34	-- common/autotest_common.sh@941 -- # uname
00:05:18.044   06:15:34	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:18.044    06:15:34	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57441
00:05:18.045   06:15:34	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:18.045  killing process with pid 57441
00:05:18.045   06:15:34	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:18.045   06:15:34	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57441'
00:05:18.045   06:15:34	-- common/autotest_common.sh@955 -- # kill 57441
00:05:18.045   06:15:34	-- common/autotest_common.sh@960 -- # wait 57441
00:05:18.304   06:15:35	-- event/cpu_locks.sh@52 -- # NOT waitforlisten 57441
00:05:18.304   06:15:35	-- common/autotest_common.sh@650 -- # local es=0
00:05:18.304   06:15:35	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57441
00:05:18.304   06:15:35	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:18.304   06:15:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:18.304    06:15:35	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:18.304   06:15:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:18.304   06:15:35	-- common/autotest_common.sh@653 -- # waitforlisten 57441
00:05:18.304   06:15:35	-- common/autotest_common.sh@829 -- # '[' -z 57441 ']'
00:05:18.304   06:15:35	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.304   06:15:35	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:18.304  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.304   06:15:35	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.304   06:15:35	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:18.304   06:15:35	-- common/autotest_common.sh@10 -- # set +x
00:05:18.304  ERROR: process (pid: 57441) is no longer running
00:05:18.304  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57441) - No such process
00:05:18.304   06:15:35	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:18.304   06:15:35	-- common/autotest_common.sh@862 -- # return 1
00:05:18.304   06:15:35	-- common/autotest_common.sh@653 -- # es=1
00:05:18.304   06:15:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:18.304   06:15:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:18.304   06:15:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:18.304   06:15:35	-- event/cpu_locks.sh@54 -- # no_locks
00:05:18.304   06:15:35	-- event/cpu_locks.sh@26 -- # lock_files=()
00:05:18.304   06:15:35	-- event/cpu_locks.sh@26 -- # local lock_files
00:05:18.304   06:15:35	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:18.304  
00:05:18.304  real	0m1.888s
00:05:18.304  user	0m2.095s
00:05:18.304  sys	0m0.552s
00:05:18.304   06:15:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:18.304  ************************************
00:05:18.304  END TEST default_locks
00:05:18.304  ************************************
00:05:18.304   06:15:35	-- common/autotest_common.sh@10 -- # set +x
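
All of the cpu_locks cases hinge on the check seen at cpu_locks.sh@22 above: a target started with core locking holds flocks on per-core files, which lslocks can see for that pid. A minimal sketch:

    # locks_exist as it appears in the trace: the target pid must hold at least one
    # lock on a /var/tmp/spdk_cpu_lock_* file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    # Usage from the test above: locks_exist 57441 right after spdk_tgt -m 0x1 comes up.
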
00:05:18.304   06:15:35	-- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:18.304   06:15:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:18.304   06:15:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:18.304   06:15:35	-- common/autotest_common.sh@10 -- # set +x
00:05:18.304  ************************************
00:05:18.304  START TEST default_locks_via_rpc
00:05:18.304  ************************************
00:05:18.304   06:15:35	-- common/autotest_common.sh@1114 -- # default_locks_via_rpc
00:05:18.304   06:15:35	-- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57505
00:05:18.305   06:15:35	-- event/cpu_locks.sh@63 -- # waitforlisten 57505
00:05:18.305   06:15:35	-- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:18.305   06:15:35	-- common/autotest_common.sh@829 -- # '[' -z 57505 ']'
00:05:18.305   06:15:35	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.305   06:15:35	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:18.305  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.305   06:15:35	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.305   06:15:35	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:18.305   06:15:35	-- common/autotest_common.sh@10 -- # set +x
00:05:18.563  [2024-12-16 06:15:35.317553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:18.563  [2024-12-16 06:15:35.317658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57505 ]
00:05:18.563  [2024-12-16 06:15:35.445094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.563  [2024-12-16 06:15:35.515504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:18.563  [2024-12-16 06:15:35.515718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.501   06:15:36	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:19.501   06:15:36	-- common/autotest_common.sh@862 -- # return 0
00:05:19.501   06:15:36	-- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:19.501   06:15:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:05:19.501   06:15:36	-- common/autotest_common.sh@10 -- # set +x
00:05:19.501   06:15:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:19.501   06:15:36	-- event/cpu_locks.sh@67 -- # no_locks
00:05:19.501   06:15:36	-- event/cpu_locks.sh@26 -- # lock_files=()
00:05:19.501   06:15:36	-- event/cpu_locks.sh@26 -- # local lock_files
00:05:19.501   06:15:36	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:19.501   06:15:36	-- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:19.501   06:15:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:05:19.501   06:15:36	-- common/autotest_common.sh@10 -- # set +x
00:05:19.501   06:15:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:19.501   06:15:36	-- event/cpu_locks.sh@71 -- # locks_exist 57505
00:05:19.501   06:15:36	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:19.501   06:15:36	-- event/cpu_locks.sh@22 -- # lslocks -p 57505
00:05:20.070   06:15:36	-- event/cpu_locks.sh@73 -- # killprocess 57505
00:05:20.070   06:15:36	-- common/autotest_common.sh@936 -- # '[' -z 57505 ']'
00:05:20.070   06:15:36	-- common/autotest_common.sh@940 -- # kill -0 57505
00:05:20.070    06:15:36	-- common/autotest_common.sh@941 -- # uname
00:05:20.070   06:15:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:20.070    06:15:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57505
00:05:20.070   06:15:36	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:20.070  killing process with pid 57505
00:05:20.070   06:15:36	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:20.070   06:15:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57505'
00:05:20.070   06:15:36	-- common/autotest_common.sh@955 -- # kill 57505
00:05:20.070   06:15:36	-- common/autotest_common.sh@960 -- # wait 57505
00:05:20.330  
00:05:20.330  real	0m1.946s
00:05:20.330  user	0m2.180s
00:05:20.330  sys	0m0.524s
00:05:20.330  ************************************
00:05:20.330  END TEST default_locks_via_rpc
00:05:20.330  ************************************
00:05:20.330   06:15:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:20.330   06:15:37	-- common/autotest_common.sh@10 -- # set +x
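
default_locks_via_rpc drives the same lock files through the runtime RPCs traced above (cpu_locks.sh@65-71) instead of process startup. In outline; the method names come straight from the log, while the pid variable is illustrative:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks   # per-core lock files released
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks    # locks re-acquired
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock                            # must hold locks again ($spdk_tgt_pid is illustrative)
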
00:05:20.330   06:15:37	-- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:20.330   06:15:37	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:20.330   06:15:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:20.330   06:15:37	-- common/autotest_common.sh@10 -- # set +x
00:05:20.330  ************************************
00:05:20.330  START TEST non_locking_app_on_locked_coremask
00:05:20.330  ************************************
00:05:20.330   06:15:37	-- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask
00:05:20.330   06:15:37	-- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57574
00:05:20.330   06:15:37	-- event/cpu_locks.sh@81 -- # waitforlisten 57574 /var/tmp/spdk.sock
00:05:20.330   06:15:37	-- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:20.330   06:15:37	-- common/autotest_common.sh@829 -- # '[' -z 57574 ']'
00:05:20.330   06:15:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.330   06:15:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:20.330  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.330   06:15:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.330   06:15:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:20.330   06:15:37	-- common/autotest_common.sh@10 -- # set +x
00:05:20.590  [2024-12-16 06:15:37.333266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:20.590  [2024-12-16 06:15:37.333364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57574 ]
00:05:20.590  [2024-12-16 06:15:37.470928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.590  [2024-12-16 06:15:37.551300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:20.590  [2024-12-16 06:15:37.551486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.529   06:15:38	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:21.529   06:15:38	-- common/autotest_common.sh@862 -- # return 0
00:05:21.529   06:15:38	-- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:21.529   06:15:38	-- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57602
00:05:21.529   06:15:38	-- event/cpu_locks.sh@85 -- # waitforlisten 57602 /var/tmp/spdk2.sock
00:05:21.529   06:15:38	-- common/autotest_common.sh@829 -- # '[' -z 57602 ']'
00:05:21.529   06:15:38	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:21.529   06:15:38	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:21.529   06:15:38	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:21.529  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:21.529   06:15:38	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:21.529   06:15:38	-- common/autotest_common.sh@10 -- # set +x
00:05:21.529  [2024-12-16 06:15:38.366855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:21.529  [2024-12-16 06:15:38.366974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57602 ]
00:05:21.529  [2024-12-16 06:15:38.499141] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:21.529  [2024-12-16 06:15:38.499190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.789  [2024-12-16 06:15:38.655835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:21.789  [2024-12-16 06:15:38.656020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.726   06:15:39	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:22.726   06:15:39	-- common/autotest_common.sh@862 -- # return 0
00:05:22.726   06:15:39	-- event/cpu_locks.sh@87 -- # locks_exist 57574
00:05:22.726   06:15:39	-- event/cpu_locks.sh@22 -- # lslocks -p 57574
00:05:22.726   06:15:39	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:22.985   06:15:39	-- event/cpu_locks.sh@89 -- # killprocess 57574
00:05:22.985   06:15:39	-- common/autotest_common.sh@936 -- # '[' -z 57574 ']'
00:05:22.985   06:15:39	-- common/autotest_common.sh@940 -- # kill -0 57574
00:05:22.985    06:15:39	-- common/autotest_common.sh@941 -- # uname
00:05:22.985   06:15:39	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:22.985    06:15:39	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57574
00:05:22.985   06:15:39	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:22.985   06:15:39	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:22.985   06:15:39	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57574'
00:05:22.985  killing process with pid 57574
00:05:22.985   06:15:39	-- common/autotest_common.sh@955 -- # kill 57574
00:05:22.985   06:15:39	-- common/autotest_common.sh@960 -- # wait 57574
00:05:23.924   06:15:40	-- event/cpu_locks.sh@90 -- # killprocess 57602
00:05:23.924   06:15:40	-- common/autotest_common.sh@936 -- # '[' -z 57602 ']'
00:05:23.924   06:15:40	-- common/autotest_common.sh@940 -- # kill -0 57602
00:05:23.924    06:15:40	-- common/autotest_common.sh@941 -- # uname
00:05:23.924   06:15:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:23.924    06:15:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57602
00:05:23.924   06:15:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:23.924   06:15:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:23.924  killing process with pid 57602
00:05:23.924   06:15:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57602'
00:05:23.924   06:15:40	-- common/autotest_common.sh@955 -- # kill 57602
00:05:23.924   06:15:40	-- common/autotest_common.sh@960 -- # wait 57602
00:05:24.183  
00:05:24.183  real	0m3.745s
00:05:24.183  user	0m4.226s
00:05:24.183  sys	0m0.931s
00:05:24.183   06:15:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:24.183   06:15:41	-- common/autotest_common.sh@10 -- # set +x
00:05:24.183  ************************************
00:05:24.183  END TEST non_locking_app_on_locked_coremask
00:05:24.183  ************************************
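
The case above has two targets sharing core 0: the second one only starts because --disable-cpumask-locks keeps it from contending for the lock file. In outline, with the masks and paths as in the trace:

    # First target takes the core-0 lock; the second skips lock acquisition entirely,
    # so both run on mask 0x1 without a "Cannot create lock" failure.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
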
00:05:24.183   06:15:41	-- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:24.183   06:15:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:24.183   06:15:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:24.183   06:15:41	-- common/autotest_common.sh@10 -- # set +x
00:05:24.183  ************************************
00:05:24.183  START TEST locking_app_on_unlocked_coremask
00:05:24.183  ************************************
00:05:24.183   06:15:41	-- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask
00:05:24.183   06:15:41	-- event/cpu_locks.sh@98 -- # spdk_tgt_pid=57677
00:05:24.183   06:15:41	-- event/cpu_locks.sh@99 -- # waitforlisten 57677 /var/tmp/spdk.sock
00:05:24.183   06:15:41	-- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:24.183   06:15:41	-- common/autotest_common.sh@829 -- # '[' -z 57677 ']'
00:05:24.183   06:15:41	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:24.183   06:15:41	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:24.183  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:24.183   06:15:41	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:24.183   06:15:41	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:24.183   06:15:41	-- common/autotest_common.sh@10 -- # set +x
00:05:24.183  [2024-12-16 06:15:41.132570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:24.183  [2024-12-16 06:15:41.132668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57677 ]
00:05:24.442  [2024-12-16 06:15:41.267568] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:24.442  [2024-12-16 06:15:41.267601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.442  [2024-12-16 06:15:41.362999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:24.442  [2024-12-16 06:15:41.363182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.380   06:15:42	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:25.380   06:15:42	-- common/autotest_common.sh@862 -- # return 0
00:05:25.380   06:15:42	-- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57705
00:05:25.380   06:15:42	-- event/cpu_locks.sh@103 -- # waitforlisten 57705 /var/tmp/spdk2.sock
00:05:25.380   06:15:42	-- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:25.380   06:15:42	-- common/autotest_common.sh@829 -- # '[' -z 57705 ']'
00:05:25.380   06:15:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:25.380   06:15:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:25.380  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:25.380   06:15:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:25.380   06:15:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:25.380   06:15:42	-- common/autotest_common.sh@10 -- # set +x
00:05:25.380  [2024-12-16 06:15:42.193660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:25.380  [2024-12-16 06:15:42.193761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57705 ]
00:05:25.380  [2024-12-16 06:15:42.334657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.640  [2024-12-16 06:15:42.503998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:25.640  [2024-12-16 06:15:42.504124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.208   06:15:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:26.208   06:15:43	-- common/autotest_common.sh@862 -- # return 0
00:05:26.208   06:15:43	-- event/cpu_locks.sh@105 -- # locks_exist 57705
00:05:26.208   06:15:43	-- event/cpu_locks.sh@22 -- # lslocks -p 57705
00:05:26.208   06:15:43	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:27.146   06:15:43	-- event/cpu_locks.sh@107 -- # killprocess 57677
00:05:27.146   06:15:43	-- common/autotest_common.sh@936 -- # '[' -z 57677 ']'
00:05:27.146   06:15:43	-- common/autotest_common.sh@940 -- # kill -0 57677
00:05:27.146    06:15:43	-- common/autotest_common.sh@941 -- # uname
00:05:27.146   06:15:43	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:27.146    06:15:43	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57677
00:05:27.146   06:15:43	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:27.146   06:15:43	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:27.146  killing process with pid 57677
00:05:27.146   06:15:43	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57677'
00:05:27.146   06:15:43	-- common/autotest_common.sh@955 -- # kill 57677
00:05:27.146   06:15:43	-- common/autotest_common.sh@960 -- # wait 57677
00:05:28.085   06:15:44	-- event/cpu_locks.sh@108 -- # killprocess 57705
00:05:28.085   06:15:44	-- common/autotest_common.sh@936 -- # '[' -z 57705 ']'
00:05:28.085   06:15:44	-- common/autotest_common.sh@940 -- # kill -0 57705
00:05:28.085    06:15:44	-- common/autotest_common.sh@941 -- # uname
00:05:28.085   06:15:44	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:28.085    06:15:44	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57705
00:05:28.085   06:15:44	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:28.085   06:15:44	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:28.085  killing process with pid 57705
00:05:28.085   06:15:44	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57705'
00:05:28.085   06:15:44	-- common/autotest_common.sh@955 -- # kill 57705
00:05:28.085   06:15:44	-- common/autotest_common.sh@960 -- # wait 57705
00:05:28.344  
00:05:28.344  real	0m4.062s
00:05:28.344  user	0m4.568s
00:05:28.344  sys	0m1.088s
00:05:28.344   06:15:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:28.344   06:15:45	-- common/autotest_common.sh@10 -- # set +x
00:05:28.344  ************************************
00:05:28.344  END TEST locking_app_on_unlocked_coremask
00:05:28.344  ************************************
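
Here the roles are reversed relative to the previous case: the first target runs with --disable-cpumask-locks, so the second, lock-taking instance on the same mask is the one expected to claim core 0 and pass locks_exist. Sketch with the flags from the trace:

    # First instance leaves the core-0 lock file untaken; the second instance
    # acquires it normally even though core 0 is already in use.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
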
00:05:28.344   06:15:45	-- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:28.344   06:15:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:28.344   06:15:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:28.344   06:15:45	-- common/autotest_common.sh@10 -- # set +x
00:05:28.344  ************************************
00:05:28.344  START TEST locking_app_on_locked_coremask
00:05:28.344  ************************************
00:05:28.344   06:15:45	-- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask
00:05:28.344   06:15:45	-- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57784
00:05:28.344   06:15:45	-- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:28.344   06:15:45	-- event/cpu_locks.sh@116 -- # waitforlisten 57784 /var/tmp/spdk.sock
00:05:28.344   06:15:45	-- common/autotest_common.sh@829 -- # '[' -z 57784 ']'
00:05:28.344   06:15:45	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.344   06:15:45	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:28.344  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:28.344   06:15:45	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:28.344   06:15:45	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:28.344   06:15:45	-- common/autotest_common.sh@10 -- # set +x
00:05:28.344  [2024-12-16 06:15:45.249642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:28.344  [2024-12-16 06:15:45.249742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57784 ]
00:05:28.603  [2024-12-16 06:15:45.382195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.603  [2024-12-16 06:15:45.453115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:28.603  [2024-12-16 06:15:45.453252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:29.225   06:15:46	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:29.225   06:15:46	-- common/autotest_common.sh@862 -- # return 0
00:05:29.225   06:15:46	-- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57812
00:05:29.225   06:15:46	-- event/cpu_locks.sh@120 -- # NOT waitforlisten 57812 /var/tmp/spdk2.sock
00:05:29.225   06:15:46	-- common/autotest_common.sh@650 -- # local es=0
00:05:29.225   06:15:46	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57812 /var/tmp/spdk2.sock
00:05:29.225   06:15:46	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:29.225   06:15:46	-- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:29.225   06:15:46	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:29.225    06:15:46	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:29.225   06:15:46	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:29.225   06:15:46	-- common/autotest_common.sh@653 -- # waitforlisten 57812 /var/tmp/spdk2.sock
00:05:29.225   06:15:46	-- common/autotest_common.sh@829 -- # '[' -z 57812 ']'
00:05:29.225   06:15:46	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:29.225   06:15:46	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:29.225  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:29.225   06:15:46	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:29.225   06:15:46	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:29.225   06:15:46	-- common/autotest_common.sh@10 -- # set +x
00:05:29.484  [2024-12-16 06:15:46.230797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:29.484  [2024-12-16 06:15:46.230895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57812 ]
00:05:29.484  [2024-12-16 06:15:46.370116] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57784 has claimed it.
00:05:29.484  [2024-12-16 06:15:46.370173] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:30.052  ERROR: process (pid: 57812) is no longer running
00:05:30.052  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57812) - No such process
00:05:30.052   06:15:46	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:30.052   06:15:46	-- common/autotest_common.sh@862 -- # return 1
00:05:30.052   06:15:46	-- common/autotest_common.sh@653 -- # es=1
00:05:30.052   06:15:46	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:30.052   06:15:46	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:30.052   06:15:46	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:30.052   06:15:46	-- event/cpu_locks.sh@122 -- # locks_exist 57784
00:05:30.052   06:15:46	-- event/cpu_locks.sh@22 -- # lslocks -p 57784
00:05:30.052   06:15:46	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:30.311   06:15:47	-- event/cpu_locks.sh@124 -- # killprocess 57784
00:05:30.311   06:15:47	-- common/autotest_common.sh@936 -- # '[' -z 57784 ']'
00:05:30.311   06:15:47	-- common/autotest_common.sh@940 -- # kill -0 57784
00:05:30.311    06:15:47	-- common/autotest_common.sh@941 -- # uname
00:05:30.311   06:15:47	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:30.311    06:15:47	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57784
00:05:30.311   06:15:47	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:30.311   06:15:47	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:30.311  killing process with pid 57784
00:05:30.311   06:15:47	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57784'
00:05:30.311   06:15:47	-- common/autotest_common.sh@955 -- # kill 57784
00:05:30.311   06:15:47	-- common/autotest_common.sh@960 -- # wait 57784
00:05:30.878  
00:05:30.878  real	0m2.447s
00:05:30.879  user	0m2.840s
00:05:30.879  sys	0m0.535s
00:05:30.879   06:15:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:30.879   06:15:47	-- common/autotest_common.sh@10 -- # set +x
00:05:30.879  ************************************
00:05:30.879  END TEST locking_app_on_locked_coremask
00:05:30.879  ************************************
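
This is the negative counterpart: with the core-0 lock already held by pid 57784, the second instance fails in claim_cpu_cores and exits, which is exactly the ERROR pair logged above, and the NOT wrapper expects waitforlisten to fail. Roughly:

    # Expected to exit non-zero with the claim error seen in the log:
    # "Cannot create lock on core 0, probably process 57784 has claimed it."
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    echo "exit status: $?"   # non-zero because core 0 is already locked
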
00:05:30.879   06:15:47	-- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:30.879   06:15:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:30.879   06:15:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:30.879   06:15:47	-- common/autotest_common.sh@10 -- # set +x
00:05:30.879  ************************************
00:05:30.879  START TEST locking_overlapped_coremask
00:05:30.879  ************************************
00:05:30.879   06:15:47	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask
00:05:30.879   06:15:47	-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57864
00:05:30.879   06:15:47	-- event/cpu_locks.sh@133 -- # waitforlisten 57864 /var/tmp/spdk.sock
00:05:30.879   06:15:47	-- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:30.879   06:15:47	-- common/autotest_common.sh@829 -- # '[' -z 57864 ']'
00:05:30.879   06:15:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.879   06:15:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:30.879  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.879   06:15:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.879   06:15:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:30.879   06:15:47	-- common/autotest_common.sh@10 -- # set +x
00:05:30.879  [2024-12-16 06:15:47.734724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:30.879  [2024-12-16 06:15:47.734974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57864 ]
00:05:31.138  [2024-12-16 06:15:47.865061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:31.138  [2024-12-16 06:15:47.938948] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:31.138  [2024-12-16 06:15:47.939541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:31.138  [2024-12-16 06:15:47.939684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:31.138  [2024-12-16 06:15:47.939687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.075   06:15:48	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:32.075   06:15:48	-- common/autotest_common.sh@862 -- # return 0
00:05:32.075   06:15:48	-- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:32.075   06:15:48	-- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57894
00:05:32.075   06:15:48	-- event/cpu_locks.sh@137 -- # NOT waitforlisten 57894 /var/tmp/spdk2.sock
00:05:32.075   06:15:48	-- common/autotest_common.sh@650 -- # local es=0
00:05:32.075   06:15:48	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57894 /var/tmp/spdk2.sock
00:05:32.075   06:15:48	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:32.075   06:15:48	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:32.075    06:15:48	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:32.075  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:32.075   06:15:48	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:32.075   06:15:48	-- common/autotest_common.sh@653 -- # waitforlisten 57894 /var/tmp/spdk2.sock
00:05:32.075   06:15:48	-- common/autotest_common.sh@829 -- # '[' -z 57894 ']'
00:05:32.075   06:15:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:32.075   06:15:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:32.075   06:15:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:32.075   06:15:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:32.075   06:15:48	-- common/autotest_common.sh@10 -- # set +x
00:05:32.075  [2024-12-16 06:15:48.799911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:32.075  [2024-12-16 06:15:48.799979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57894 ]
00:05:32.075  [2024-12-16 06:15:48.937551] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57864 has claimed it.
00:05:32.075  [2024-12-16 06:15:48.937616] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:32.644  ERROR: process (pid: 57894) is no longer running
00:05:32.644  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57894) - No such process
00:05:32.644   06:15:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:32.644   06:15:49	-- common/autotest_common.sh@862 -- # return 1
00:05:32.644   06:15:49	-- common/autotest_common.sh@653 -- # es=1
00:05:32.644   06:15:49	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:32.644   06:15:49	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:32.644   06:15:49	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:32.644   06:15:49	-- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:32.644   06:15:49	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:32.644   06:15:49	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:32.644   06:15:49	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:32.644   06:15:49	-- event/cpu_locks.sh@141 -- # killprocess 57864
00:05:32.644   06:15:49	-- common/autotest_common.sh@936 -- # '[' -z 57864 ']'
00:05:32.644   06:15:49	-- common/autotest_common.sh@940 -- # kill -0 57864
00:05:32.644    06:15:49	-- common/autotest_common.sh@941 -- # uname
00:05:32.644   06:15:49	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:32.644    06:15:49	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57864
00:05:32.644   06:15:49	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:32.644   06:15:49	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:32.644   06:15:49	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57864'
00:05:32.644  killing process with pid 57864
00:05:32.644   06:15:49	-- common/autotest_common.sh@955 -- # kill 57864
00:05:32.644   06:15:49	-- common/autotest_common.sh@960 -- # wait 57864
00:05:33.212  
00:05:33.212  real	0m2.282s
00:05:33.212  user	0m6.515s
00:05:33.212  sys	0m0.419s
00:05:33.212   06:15:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:33.212   06:15:49	-- common/autotest_common.sh@10 -- # set +x
00:05:33.212  ************************************
00:05:33.212  END TEST locking_overlapped_coremask
00:05:33.212  ************************************
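
check_remaining_locks (cpu_locks.sh@36-38 in the trace above) verifies that exactly the lock files for the first target's 0x7 mask survive after the overlapping 0x1c instance is rejected:

    # Close to the traced lines: glob the existing lock files and compare them
    # against the expected set for cores 0-2.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }
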
00:05:33.212   06:15:50	-- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:33.212   06:15:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:33.212   06:15:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:33.212   06:15:50	-- common/autotest_common.sh@10 -- # set +x
00:05:33.212  ************************************
00:05:33.212  START TEST locking_overlapped_coremask_via_rpc
00:05:33.212  ************************************
00:05:33.213   06:15:50	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc
00:05:33.213  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:33.213   06:15:50	-- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57940
00:05:33.213   06:15:50	-- event/cpu_locks.sh@149 -- # waitforlisten 57940 /var/tmp/spdk.sock
00:05:33.213   06:15:50	-- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:33.213   06:15:50	-- common/autotest_common.sh@829 -- # '[' -z 57940 ']'
00:05:33.213   06:15:50	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:33.213   06:15:50	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:33.213   06:15:50	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:33.213   06:15:50	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:33.213   06:15:50	-- common/autotest_common.sh@10 -- # set +x
00:05:33.213  [2024-12-16 06:15:50.069959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:33.213  [2024-12-16 06:15:50.070046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57940 ]
00:05:33.471  [2024-12-16 06:15:50.198038] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:33.471  [2024-12-16 06:15:50.198070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:33.471  [2024-12-16 06:15:50.270845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:33.471  [2024-12-16 06:15:50.271528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:33.471  [2024-12-16 06:15:50.271649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:33.471  [2024-12-16 06:15:50.271655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.408   06:15:51	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:34.408   06:15:51	-- common/autotest_common.sh@862 -- # return 0
00:05:34.408   06:15:51	-- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:34.408   06:15:51	-- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=57970
00:05:34.408  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:34.408   06:15:51	-- event/cpu_locks.sh@153 -- # waitforlisten 57970 /var/tmp/spdk2.sock
00:05:34.408   06:15:51	-- common/autotest_common.sh@829 -- # '[' -z 57970 ']'
00:05:34.408   06:15:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:34.408   06:15:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:34.408   06:15:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:34.408   06:15:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:34.408   06:15:51	-- common/autotest_common.sh@10 -- # set +x
00:05:34.408  [2024-12-16 06:15:51.080086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:34.408  [2024-12-16 06:15:51.080332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57970 ]
00:05:34.408  [2024-12-16 06:15:51.215496] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:34.408  [2024-12-16 06:15:51.219554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:34.667  [2024-12-16 06:15:51.384387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:34.667  [2024-12-16 06:15:51.385195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:34.667  [2024-12-16 06:15:51.388613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:34.667  [2024-12-16 06:15:51.388614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:05:35.234   06:15:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:35.234   06:15:52	-- common/autotest_common.sh@862 -- # return 0
00:05:35.234   06:15:52	-- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:35.234   06:15:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:05:35.234   06:15:52	-- common/autotest_common.sh@10 -- # set +x
00:05:35.234   06:15:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:35.234   06:15:52	-- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:35.234   06:15:52	-- common/autotest_common.sh@650 -- # local es=0
00:05:35.234   06:15:52	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:35.234   06:15:52	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:05:35.234   06:15:52	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.234    06:15:52	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:05:35.234   06:15:52	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.234   06:15:52	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:35.234   06:15:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:05:35.234   06:15:52	-- common/autotest_common.sh@10 -- # set +x
00:05:35.234  [2024-12-16 06:15:52.113661] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57940 has claimed it.
00:05:35.234  2024/12/16 06:15:52 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2
00:05:35.234  request:
00:05:35.234  {
00:05:35.234  "method": "framework_enable_cpumask_locks",
00:05:35.234  "params": {}
00:05:35.234  }
00:05:35.234  Got JSON-RPC error response
00:05:35.234  GoRPCClient: error on JSON-RPC call
00:05:35.234   06:15:52	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:35.234  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.234   06:15:52	-- common/autotest_common.sh@653 -- # es=1
00:05:35.234   06:15:52	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:35.234   06:15:52	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:35.234   06:15:52	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:35.234   06:15:52	-- event/cpu_locks.sh@158 -- # waitforlisten 57940 /var/tmp/spdk.sock
00:05:35.234   06:15:52	-- common/autotest_common.sh@829 -- # '[' -z 57940 ']'
00:05:35.234   06:15:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.234   06:15:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:35.234   06:15:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:35.234   06:15:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:35.235   06:15:52	-- common/autotest_common.sh@10 -- # set +x
00:05:35.493   06:15:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:35.493  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:35.493   06:15:52	-- common/autotest_common.sh@862 -- # return 0
00:05:35.493   06:15:52	-- event/cpu_locks.sh@159 -- # waitforlisten 57970 /var/tmp/spdk2.sock
00:05:35.493   06:15:52	-- common/autotest_common.sh@829 -- # '[' -z 57970 ']'
00:05:35.493   06:15:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:35.493   06:15:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:35.493   06:15:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:35.493   06:15:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:35.493   06:15:52	-- common/autotest_common.sh@10 -- # set +x
00:05:35.753  ************************************
00:05:35.753  END TEST locking_overlapped_coremask_via_rpc
00:05:35.753  ************************************
00:05:35.753   06:15:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:35.753   06:15:52	-- common/autotest_common.sh@862 -- # return 0
00:05:35.753   06:15:52	-- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:35.753   06:15:52	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:35.753   06:15:52	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:35.753   06:15:52	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:35.753  
00:05:35.753  real	0m2.607s
00:05:35.753  user	0m1.317s
00:05:35.753  sys	0m0.223s
00:05:35.753   06:15:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:35.753   06:15:52	-- common/autotest_common.sh@10 -- # set +x
00:05:35.753   06:15:52	-- event/cpu_locks.sh@174 -- # cleanup
00:05:35.753   06:15:52	-- event/cpu_locks.sh@15 -- # [[ -z 57940 ]]
00:05:35.753   06:15:52	-- event/cpu_locks.sh@15 -- # killprocess 57940
00:05:35.753   06:15:52	-- common/autotest_common.sh@936 -- # '[' -z 57940 ']'
00:05:35.753   06:15:52	-- common/autotest_common.sh@940 -- # kill -0 57940
00:05:35.753    06:15:52	-- common/autotest_common.sh@941 -- # uname
00:05:35.753   06:15:52	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:35.753    06:15:52	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57940
00:05:35.753  killing process with pid 57940
00:05:35.753   06:15:52	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:35.753   06:15:52	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:35.753   06:15:52	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57940'
00:05:35.753   06:15:52	-- common/autotest_common.sh@955 -- # kill 57940
00:05:35.753   06:15:52	-- common/autotest_common.sh@960 -- # wait 57940
00:05:36.319   06:15:53	-- event/cpu_locks.sh@16 -- # [[ -z 57970 ]]
00:05:36.319   06:15:53	-- event/cpu_locks.sh@16 -- # killprocess 57970
00:05:36.319   06:15:53	-- common/autotest_common.sh@936 -- # '[' -z 57970 ']'
00:05:36.319   06:15:53	-- common/autotest_common.sh@940 -- # kill -0 57970
00:05:36.319    06:15:53	-- common/autotest_common.sh@941 -- # uname
00:05:36.319   06:15:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:36.319    06:15:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57970
00:05:36.319  killing process with pid 57970
00:05:36.319   06:15:53	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:05:36.319   06:15:53	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:05:36.319   06:15:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 57970'
00:05:36.319   06:15:53	-- common/autotest_common.sh@955 -- # kill 57970
00:05:36.319   06:15:53	-- common/autotest_common.sh@960 -- # wait 57970
00:05:36.577   06:15:53	-- event/cpu_locks.sh@18 -- # rm -f
00:05:36.577   06:15:53	-- event/cpu_locks.sh@1 -- # cleanup
00:05:36.577   06:15:53	-- event/cpu_locks.sh@15 -- # [[ -z 57940 ]]
00:05:36.577   06:15:53	-- event/cpu_locks.sh@15 -- # killprocess 57940
00:05:36.577   06:15:53	-- common/autotest_common.sh@936 -- # '[' -z 57940 ']'
00:05:36.577   06:15:53	-- common/autotest_common.sh@940 -- # kill -0 57940
00:05:36.577  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (57940) - No such process
00:05:36.577  Process with pid 57940 is not found
00:05:36.577   06:15:53	-- common/autotest_common.sh@963 -- # echo 'Process with pid 57940 is not found'
00:05:36.577   06:15:53	-- event/cpu_locks.sh@16 -- # [[ -z 57970 ]]
00:05:36.577   06:15:53	-- event/cpu_locks.sh@16 -- # killprocess 57970
00:05:36.577   06:15:53	-- common/autotest_common.sh@936 -- # '[' -z 57970 ']'
00:05:36.577   06:15:53	-- common/autotest_common.sh@940 -- # kill -0 57970
00:05:36.577  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (57970) - No such process
00:05:36.577  Process with pid 57970 is not found
00:05:36.577   06:15:53	-- common/autotest_common.sh@963 -- # echo 'Process with pid 57970 is not found'
00:05:36.577   06:15:53	-- event/cpu_locks.sh@18 -- # rm -f
00:05:36.577  
00:05:36.577  real	0m20.375s
00:05:36.577  user	0m36.408s
00:05:36.577  sys	0m5.114s
00:05:36.577   06:15:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:36.577  ************************************
00:05:36.577  END TEST cpu_locks
00:05:36.577  ************************************
00:05:36.577   06:15:53	-- common/autotest_common.sh@10 -- # set +x
00:05:36.835  
00:05:36.835  real	0m48.500s
00:05:36.835  user	1m34.769s
00:05:36.835  sys	0m8.796s
00:05:36.835   06:15:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:36.835  ************************************
00:05:36.836  END TEST event
00:05:36.836  ************************************
00:05:36.836   06:15:53	-- common/autotest_common.sh@10 -- # set +x
00:05:36.836   06:15:53	-- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:05:36.836   06:15:53	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:36.836   06:15:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:36.836   06:15:53	-- common/autotest_common.sh@10 -- # set +x
00:05:36.836  ************************************
00:05:36.836  START TEST thread
00:05:36.836  ************************************
00:05:36.836   06:15:53	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:05:36.836  * Looking for test storage...
00:05:36.836  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:05:36.836    06:15:53	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:36.836     06:15:53	-- common/autotest_common.sh@1690 -- # lcov --version
00:05:36.836     06:15:53	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:05:36.836    06:15:53	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:05:36.836    06:15:53	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:05:36.836    06:15:53	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:05:36.836    06:15:53	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:05:36.836    06:15:53	-- scripts/common.sh@335 -- # IFS=.-:
00:05:36.836    06:15:53	-- scripts/common.sh@335 -- # read -ra ver1
00:05:36.836    06:15:53	-- scripts/common.sh@336 -- # IFS=.-:
00:05:36.836    06:15:53	-- scripts/common.sh@336 -- # read -ra ver2
00:05:36.836    06:15:53	-- scripts/common.sh@337 -- # local 'op=<'
00:05:36.836    06:15:53	-- scripts/common.sh@339 -- # ver1_l=2
00:05:36.836    06:15:53	-- scripts/common.sh@340 -- # ver2_l=1
00:05:36.836    06:15:53	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:05:36.836    06:15:53	-- scripts/common.sh@343 -- # case "$op" in
00:05:36.836    06:15:53	-- scripts/common.sh@344 -- # : 1
00:05:36.836    06:15:53	-- scripts/common.sh@363 -- # (( v = 0 ))
00:05:36.836    06:15:53	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.836     06:15:53	-- scripts/common.sh@364 -- # decimal 1
00:05:36.836     06:15:53	-- scripts/common.sh@352 -- # local d=1
00:05:36.836     06:15:53	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.836     06:15:53	-- scripts/common.sh@354 -- # echo 1
00:05:36.836    06:15:53	-- scripts/common.sh@364 -- # ver1[v]=1
00:05:36.836     06:15:53	-- scripts/common.sh@365 -- # decimal 2
00:05:36.836     06:15:53	-- scripts/common.sh@352 -- # local d=2
00:05:36.836     06:15:53	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.836     06:15:53	-- scripts/common.sh@354 -- # echo 2
00:05:36.836    06:15:53	-- scripts/common.sh@365 -- # ver2[v]=2
00:05:36.836    06:15:53	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:05:36.836    06:15:53	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:05:36.836    06:15:53	-- scripts/common.sh@367 -- # return 0
00:05:36.836    06:15:53	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.836    06:15:53	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:05:36.836  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.836  		--rc genhtml_branch_coverage=1
00:05:36.836  		--rc genhtml_function_coverage=1
00:05:36.836  		--rc genhtml_legend=1
00:05:36.836  		--rc geninfo_all_blocks=1
00:05:36.836  		--rc geninfo_unexecuted_blocks=1
00:05:36.836  		
00:05:36.836  		'
00:05:36.836    06:15:53	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:05:36.836  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.836  		--rc genhtml_branch_coverage=1
00:05:36.836  		--rc genhtml_function_coverage=1
00:05:36.836  		--rc genhtml_legend=1
00:05:36.836  		--rc geninfo_all_blocks=1
00:05:36.836  		--rc geninfo_unexecuted_blocks=1
00:05:36.836  		
00:05:36.836  		'
00:05:36.836    06:15:53	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:05:36.836  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.836  		--rc genhtml_branch_coverage=1
00:05:36.836  		--rc genhtml_function_coverage=1
00:05:36.836  		--rc genhtml_legend=1
00:05:36.836  		--rc geninfo_all_blocks=1
00:05:36.836  		--rc geninfo_unexecuted_blocks=1
00:05:36.836  		
00:05:36.836  		'
00:05:36.836    06:15:53	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:05:36.836  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.836  		--rc genhtml_branch_coverage=1
00:05:36.836  		--rc genhtml_function_coverage=1
00:05:36.836  		--rc genhtml_legend=1
00:05:36.836  		--rc geninfo_all_blocks=1
00:05:36.836  		--rc geninfo_unexecuted_blocks=1
00:05:36.836  		
00:05:36.836  		'
00:05:36.836   06:15:53	-- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:36.836   06:15:53	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:05:36.836   06:15:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:36.836   06:15:53	-- common/autotest_common.sh@10 -- # set +x
00:05:36.836  ************************************
00:05:36.836  START TEST thread_poller_perf
00:05:36.836  ************************************
00:05:36.836   06:15:53	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:36.836  [2024-12-16 06:15:53.802439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:36.836  [2024-12-16 06:15:53.802571] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58124 ]
00:05:37.095  [2024-12-16 06:15:53.942906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.095  [2024-12-16 06:15:54.031660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.095  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:38.470  [2024-12-16T06:15:55.446Z]  ======================================
00:05:38.470  [2024-12-16T06:15:55.446Z]  busy:2213968564 (cyc)
00:05:38.470  [2024-12-16T06:15:55.446Z]  total_run_count: 371000
00:05:38.470  [2024-12-16T06:15:55.446Z]  tsc_hz: 2200000000 (cyc)
00:05:38.470  [2024-12-16T06:15:55.446Z]  ======================================
00:05:38.470  [2024-12-16T06:15:55.446Z]  poller_cost: 5967 (cyc), 2712 (nsec)
00:05:38.470  
00:05:38.470  real	0m1.342s
00:05:38.470  user	0m1.181s
00:05:38.470  sys	0m0.052s
00:05:38.470   06:15:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:38.470  ************************************
00:05:38.470  END TEST thread_poller_perf
00:05:38.470  ************************************
00:05:38.470   06:15:55	-- common/autotest_common.sh@10 -- # set +x
00:05:38.470   06:15:55	-- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:38.470   06:15:55	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:05:38.470   06:15:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:38.470   06:15:55	-- common/autotest_common.sh@10 -- # set +x
00:05:38.470  ************************************
00:05:38.470  START TEST thread_poller_perf
00:05:38.470  ************************************
00:05:38.470   06:15:55	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:38.470  [2024-12-16 06:15:55.189451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:38.470  [2024-12-16 06:15:55.189567] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58165 ]
00:05:38.470  [2024-12-16 06:15:55.317381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.470  [2024-12-16 06:15:55.380107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.470  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:39.846  [2024-12-16T06:15:56.822Z]  ======================================
00:05:39.846  [2024-12-16T06:15:56.822Z]  busy:2202775540 (cyc)
00:05:39.846  [2024-12-16T06:15:56.822Z]  total_run_count: 5228000
00:05:39.846  [2024-12-16T06:15:56.822Z]  tsc_hz: 2200000000 (cyc)
00:05:39.846  [2024-12-16T06:15:56.822Z]  ======================================
00:05:39.846  [2024-12-16T06:15:56.822Z]  poller_cost: 421 (cyc), 191 (nsec)
00:05:39.846  
00:05:39.846  real	0m1.291s
00:05:39.846  user	0m1.136s
00:05:39.846  sys	0m0.049s
00:05:39.846   06:15:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:39.846  ************************************
00:05:39.846  END TEST thread_poller_perf
00:05:39.846  ************************************
00:05:39.846   06:15:56	-- common/autotest_common.sh@10 -- # set +x
00:05:39.846   06:15:56	-- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:39.846  
00:05:39.846  real	0m2.898s
00:05:39.846  user	0m2.438s
00:05:39.846  sys	0m0.244s
00:05:39.846   06:15:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:39.846  ************************************
00:05:39.846  END TEST thread
00:05:39.846  ************************************
00:05:39.846   06:15:56	-- common/autotest_common.sh@10 -- # set +x
00:05:39.846   06:15:56	-- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:05:39.846   06:15:56	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:39.846   06:15:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:39.846   06:15:56	-- common/autotest_common.sh@10 -- # set +x
00:05:39.846  ************************************
00:05:39.846  START TEST accel
00:05:39.846  ************************************
00:05:39.846   06:15:56	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:05:39.846  * Looking for test storage...
00:05:39.846  * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:05:39.846    06:15:56	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:39.846     06:15:56	-- common/autotest_common.sh@1690 -- # lcov --version
00:05:39.846     06:15:56	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:05:39.846    06:15:56	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:05:39.846    06:15:56	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:05:39.846    06:15:56	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:05:39.846    06:15:56	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:05:39.846    06:15:56	-- scripts/common.sh@335 -- # IFS=.-:
00:05:39.846    06:15:56	-- scripts/common.sh@335 -- # read -ra ver1
00:05:39.846    06:15:56	-- scripts/common.sh@336 -- # IFS=.-:
00:05:39.846    06:15:56	-- scripts/common.sh@336 -- # read -ra ver2
00:05:39.846    06:15:56	-- scripts/common.sh@337 -- # local 'op=<'
00:05:39.846    06:15:56	-- scripts/common.sh@339 -- # ver1_l=2
00:05:39.846    06:15:56	-- scripts/common.sh@340 -- # ver2_l=1
00:05:39.846    06:15:56	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:05:39.846    06:15:56	-- scripts/common.sh@343 -- # case "$op" in
00:05:39.846    06:15:56	-- scripts/common.sh@344 -- # : 1
00:05:39.846    06:15:56	-- scripts/common.sh@363 -- # (( v = 0 ))
00:05:39.846    06:15:56	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:39.846     06:15:56	-- scripts/common.sh@364 -- # decimal 1
00:05:39.846     06:15:56	-- scripts/common.sh@352 -- # local d=1
00:05:39.846     06:15:56	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:39.846     06:15:56	-- scripts/common.sh@354 -- # echo 1
00:05:39.846    06:15:56	-- scripts/common.sh@364 -- # ver1[v]=1
00:05:39.846     06:15:56	-- scripts/common.sh@365 -- # decimal 2
00:05:39.846     06:15:56	-- scripts/common.sh@352 -- # local d=2
00:05:39.846     06:15:56	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:39.846     06:15:56	-- scripts/common.sh@354 -- # echo 2
00:05:39.846    06:15:56	-- scripts/common.sh@365 -- # ver2[v]=2
00:05:39.846    06:15:56	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:05:39.846    06:15:56	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:05:39.846    06:15:56	-- scripts/common.sh@367 -- # return 0
00:05:39.846    06:15:56	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:39.846    06:15:56	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:05:39.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:39.846  		--rc genhtml_branch_coverage=1
00:05:39.846  		--rc genhtml_function_coverage=1
00:05:39.846  		--rc genhtml_legend=1
00:05:39.846  		--rc geninfo_all_blocks=1
00:05:39.846  		--rc geninfo_unexecuted_blocks=1
00:05:39.846  		
00:05:39.846  		'
00:05:39.846    06:15:56	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:05:39.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:39.846  		--rc genhtml_branch_coverage=1
00:05:39.846  		--rc genhtml_function_coverage=1
00:05:39.846  		--rc genhtml_legend=1
00:05:39.846  		--rc geninfo_all_blocks=1
00:05:39.846  		--rc geninfo_unexecuted_blocks=1
00:05:39.846  		
00:05:39.846  		'
00:05:39.846    06:15:56	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:05:39.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:39.846  		--rc genhtml_branch_coverage=1
00:05:39.846  		--rc genhtml_function_coverage=1
00:05:39.846  		--rc genhtml_legend=1
00:05:39.846  		--rc geninfo_all_blocks=1
00:05:39.846  		--rc geninfo_unexecuted_blocks=1
00:05:39.846  		
00:05:39.846  		'
00:05:39.846    06:15:56	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:05:39.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:39.846  		--rc genhtml_branch_coverage=1
00:05:39.846  		--rc genhtml_function_coverage=1
00:05:39.846  		--rc genhtml_legend=1
00:05:39.846  		--rc geninfo_all_blocks=1
00:05:39.846  		--rc geninfo_unexecuted_blocks=1
00:05:39.846  		
00:05:39.846  		'
00:05:39.846   06:15:56	-- accel/accel.sh@73 -- # declare -A expected_opcs
00:05:39.847   06:15:56	-- accel/accel.sh@74 -- # get_expected_opcs
00:05:39.847   06:15:56	-- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:39.847   06:15:56	-- accel/accel.sh@59 -- # spdk_tgt_pid=58241
00:05:39.847   06:15:56	-- accel/accel.sh@60 -- # waitforlisten 58241
00:05:39.847   06:15:56	-- common/autotest_common.sh@829 -- # '[' -z 58241 ']'
00:05:39.847   06:15:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:39.847   06:15:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:39.847   06:15:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:39.847  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:39.847   06:15:56	-- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:05:39.847   06:15:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:39.847   06:15:56	-- common/autotest_common.sh@10 -- # set +x
00:05:39.847    06:15:56	-- accel/accel.sh@58 -- # build_accel_config
00:05:39.847    06:15:56	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:39.847    06:15:56	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:39.847    06:15:56	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:39.847    06:15:56	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:39.847    06:15:56	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:39.847    06:15:56	-- accel/accel.sh@41 -- # local IFS=,
00:05:39.847    06:15:56	-- accel/accel.sh@42 -- # jq -r .
00:05:39.847  [2024-12-16 06:15:56.799891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:39.847  [2024-12-16 06:15:56.800001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58241 ]
00:05:40.109  [2024-12-16 06:15:56.929795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.109  [2024-12-16 06:15:57.024012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:40.109  [2024-12-16 06:15:57.024184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.077   06:15:57	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:41.077   06:15:57	-- common/autotest_common.sh@862 -- # return 0
00:05:41.077   06:15:57	-- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:05:41.077    06:15:57	-- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments
00:05:41.077    06:15:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:05:41.077    06:15:57	-- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:05:41.077    06:15:57	-- common/autotest_common.sh@10 -- # set +x
00:05:41.077    06:15:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.077   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.077   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.077   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.078   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.078   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.078   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.078   06:15:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:05:41.078   06:15:57	-- accel/accel.sh@64 -- # IFS==
00:05:41.078   06:15:57	-- accel/accel.sh@64 -- # read -r opc module
00:05:41.078   06:15:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:05:41.078   06:15:57	-- accel/accel.sh@67 -- # killprocess 58241
00:05:41.078   06:15:57	-- common/autotest_common.sh@936 -- # '[' -z 58241 ']'
00:05:41.078   06:15:57	-- common/autotest_common.sh@940 -- # kill -0 58241
00:05:41.078    06:15:57	-- common/autotest_common.sh@941 -- # uname
00:05:41.078   06:15:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:41.078    06:15:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58241
00:05:41.078   06:15:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:41.078  killing process with pid 58241
00:05:41.078   06:15:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:41.078   06:15:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 58241'
00:05:41.078   06:15:57	-- common/autotest_common.sh@955 -- # kill 58241
00:05:41.078   06:15:57	-- common/autotest_common.sh@960 -- # wait 58241
00:05:41.336   06:15:58	-- accel/accel.sh@68 -- # trap - ERR
00:05:41.336   06:15:58	-- accel/accel.sh@81 -- # run_test accel_help accel_perf -h
00:05:41.336   06:15:58	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:05:41.336   06:15:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:41.336   06:15:58	-- common/autotest_common.sh@10 -- # set +x
00:05:41.336   06:15:58	-- common/autotest_common.sh@1114 -- # accel_perf -h
00:05:41.336   06:15:58	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:05:41.336    06:15:58	-- accel/accel.sh@12 -- # build_accel_config
00:05:41.336    06:15:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:41.336    06:15:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:41.336    06:15:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:41.336    06:15:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:41.336    06:15:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:41.336    06:15:58	-- accel/accel.sh@41 -- # local IFS=,
00:05:41.336    06:15:58	-- accel/accel.sh@42 -- # jq -r .
00:05:41.337   06:15:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:41.337   06:15:58	-- common/autotest_common.sh@10 -- # set +x
00:05:41.337   06:15:58	-- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:05:41.337   06:15:58	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:05:41.337   06:15:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:41.337   06:15:58	-- common/autotest_common.sh@10 -- # set +x
00:05:41.337  ************************************
00:05:41.337  START TEST accel_missing_filename
00:05:41.337  ************************************
00:05:41.337   06:15:58	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress
00:05:41.337   06:15:58	-- common/autotest_common.sh@650 -- # local es=0
00:05:41.337   06:15:58	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress
00:05:41.337   06:15:58	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:05:41.337   06:15:58	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:41.337    06:15:58	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:05:41.337   06:15:58	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:41.337   06:15:58	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress
00:05:41.337   06:15:58	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:05:41.337    06:15:58	-- accel/accel.sh@12 -- # build_accel_config
00:05:41.337    06:15:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:41.337    06:15:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:41.337    06:15:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:41.337    06:15:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:41.337    06:15:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:41.337    06:15:58	-- accel/accel.sh@41 -- # local IFS=,
00:05:41.337    06:15:58	-- accel/accel.sh@42 -- # jq -r .
00:05:41.337  [2024-12-16 06:15:58.308713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:41.337  [2024-12-16 06:15:58.308815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ]
00:05:41.595  [2024-12-16 06:15:58.442870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.595  [2024-12-16 06:15:58.516114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.595  [2024-12-16 06:15:58.569231] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:41.854  [2024-12-16 06:15:58.641268] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:05:41.854  A filename is required.
00:05:41.854   06:15:58	-- common/autotest_common.sh@653 -- # es=234
00:05:41.854   06:15:58	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:41.854   06:15:58	-- common/autotest_common.sh@662 -- # es=106
00:05:41.854   06:15:58	-- common/autotest_common.sh@663 -- # case "$es" in
00:05:41.854   06:15:58	-- common/autotest_common.sh@670 -- # es=1
00:05:41.854   06:15:58	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:41.854  
00:05:41.854  real	0m0.450s
00:05:41.854  user	0m0.295s
00:05:41.854  sys	0m0.101s
00:05:41.854   06:15:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:41.854   06:15:58	-- common/autotest_common.sh@10 -- # set +x
00:05:41.854  ************************************
00:05:41.854  END TEST accel_missing_filename
00:05:41.854  ************************************
00:05:41.854   06:15:58	-- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:41.854   06:15:58	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:05:41.854   06:15:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:41.854   06:15:58	-- common/autotest_common.sh@10 -- # set +x
00:05:41.854  ************************************
00:05:41.854  START TEST accel_compress_verify
00:05:41.854  ************************************
00:05:41.854   06:15:58	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:41.854   06:15:58	-- common/autotest_common.sh@650 -- # local es=0
00:05:41.854   06:15:58	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:41.854   06:15:58	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:05:41.854   06:15:58	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:41.854    06:15:58	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:05:41.854   06:15:58	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:41.854   06:15:58	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:41.854   06:15:58	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:41.854    06:15:58	-- accel/accel.sh@12 -- # build_accel_config
00:05:41.854    06:15:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:41.854    06:15:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:41.854    06:15:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:41.854    06:15:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:41.854    06:15:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:41.854    06:15:58	-- accel/accel.sh@41 -- # local IFS=,
00:05:41.854    06:15:58	-- accel/accel.sh@42 -- # jq -r .
00:05:41.854  [2024-12-16 06:15:58.810031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:41.854  [2024-12-16 06:15:58.810144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58335 ]
00:05:42.113  [2024-12-16 06:15:58.947031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.113  [2024-12-16 06:15:59.024850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.113  [2024-12-16 06:15:59.080812] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:42.371  [2024-12-16 06:15:59.154030] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:05:42.371  
00:05:42.371  Compression does not support the verify option, aborting.
00:05:42.371   06:15:59	-- common/autotest_common.sh@653 -- # es=161
00:05:42.371   06:15:59	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:42.371   06:15:59	-- common/autotest_common.sh@662 -- # es=33
00:05:42.371   06:15:59	-- common/autotest_common.sh@663 -- # case "$es" in
00:05:42.371   06:15:59	-- common/autotest_common.sh@670 -- # es=1
00:05:42.371   06:15:59	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:42.371  
00:05:42.371  real	0m0.460s
00:05:42.371  user	0m0.290s
00:05:42.371  sys	0m0.114s
00:05:42.371   06:15:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:42.371   06:15:59	-- common/autotest_common.sh@10 -- # set +x
00:05:42.371  ************************************
00:05:42.371  END TEST accel_compress_verify
00:05:42.371  ************************************
00:05:42.371   06:15:59	-- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:05:42.371   06:15:59	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:05:42.371   06:15:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:42.371   06:15:59	-- common/autotest_common.sh@10 -- # set +x
00:05:42.371  ************************************
00:05:42.371  START TEST accel_wrong_workload
00:05:42.371  ************************************
00:05:42.371   06:15:59	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar
00:05:42.371   06:15:59	-- common/autotest_common.sh@650 -- # local es=0
00:05:42.371   06:15:59	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:05:42.371   06:15:59	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:05:42.371   06:15:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:42.371    06:15:59	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:05:42.371   06:15:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:42.371   06:15:59	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar
00:05:42.371   06:15:59	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:05:42.371    06:15:59	-- accel/accel.sh@12 -- # build_accel_config
00:05:42.371    06:15:59	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:42.371    06:15:59	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:42.371    06:15:59	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:42.371    06:15:59	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:42.371    06:15:59	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:42.371    06:15:59	-- accel/accel.sh@41 -- # local IFS=,
00:05:42.371    06:15:59	-- accel/accel.sh@42 -- # jq -r .
00:05:42.371  Unsupported workload type: foobar
00:05:42.371  [2024-12-16 06:15:59.318895] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:05:42.371  accel_perf options:
00:05:42.371  	[-h help message]
00:05:42.371  	[-q queue depth per core]
00:05:42.371  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:05:42.371  	[-T number of threads per core
00:05:42.371  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:05:42.371  	[-t time in seconds]
00:05:42.371  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:05:42.371  	[                                       dif_verify, , dif_generate, dif_generate_copy
00:05:42.371  	[-M assign module to the operation, not compatible with accel_assign_opc RPC
00:05:42.371  	[-l for compress/decompress workloads, name of uncompressed input file
00:05:42.371  	[-S for crc32c workload, use this seed value (default 0)
00:05:42.371  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:05:42.371  	[-f for fill workload, use this BYTE value (default 255)
00:05:42.371  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:05:42.371  	[-y verify result if this switch is on]
00:05:42.371  	[-a tasks to allocate per core (default: same value as -q)]
00:05:42.371  		Can be used to spread operations across a wider range of memory.
00:05:42.371   06:15:59	-- common/autotest_common.sh@653 -- # es=1
00:05:42.371   06:15:59	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:42.371   06:15:59	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:42.371   06:15:59	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:42.372  
00:05:42.372  real	0m0.033s
00:05:42.372  user	0m0.020s
00:05:42.372  sys	0m0.013s
00:05:42.372   06:15:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:42.372  ************************************
00:05:42.372  END TEST accel_wrong_workload
00:05:42.372  ************************************
00:05:42.372   06:15:59	-- common/autotest_common.sh@10 -- # set +x
00:05:42.630   06:15:59	-- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:05:42.630   06:15:59	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:05:42.630   06:15:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:42.630   06:15:59	-- common/autotest_common.sh@10 -- # set +x
00:05:42.630  ************************************
00:05:42.630  START TEST accel_negative_buffers
00:05:42.630  ************************************
00:05:42.630   06:15:59	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:05:42.630   06:15:59	-- common/autotest_common.sh@650 -- # local es=0
00:05:42.630   06:15:59	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:05:42.630   06:15:59	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:05:42.630   06:15:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:42.630    06:15:59	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:05:42.631   06:15:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:42.631   06:15:59	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1
00:05:42.631   06:15:59	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:05:42.631    06:15:59	-- accel/accel.sh@12 -- # build_accel_config
00:05:42.631    06:15:59	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:42.631    06:15:59	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:42.631    06:15:59	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:42.631    06:15:59	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:42.631    06:15:59	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:42.631    06:15:59	-- accel/accel.sh@41 -- # local IFS=,
00:05:42.631    06:15:59	-- accel/accel.sh@42 -- # jq -r .
00:05:42.631  -x option must be non-negative.
00:05:42.631  [2024-12-16 06:15:59.397205] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:05:42.631  accel_perf options:
00:05:42.631  	[-h help message]
00:05:42.631  	[-q queue depth per core]
00:05:42.631  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:05:42.631  	[-T number of threads per core
00:05:42.631  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:05:42.631  	[-t time in seconds]
00:05:42.631  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:05:42.631  	[                                       dif_verify, , dif_generate, dif_generate_copy
00:05:42.631  	[-M assign module to the operation, not compatible with accel_assign_opc RPC
00:05:42.631  	[-l for compress/decompress workloads, name of uncompressed input file
00:05:42.631  	[-S for crc32c workload, use this seed value (default 0)
00:05:42.631  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:05:42.631  	[-f for fill workload, use this BYTE value (default 255)
00:05:42.631  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:05:42.631  	[-y verify result if this switch is on]
00:05:42.631  	[-a tasks to allocate per core (default: same value as -q)]
00:05:42.631  		Can be used to spread operations across a wider range of memory.
00:05:42.631   06:15:59	-- common/autotest_common.sh@653 -- # es=1
00:05:42.631   06:15:59	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:42.631   06:15:59	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:42.631   06:15:59	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:42.631  
00:05:42.631  real	0m0.029s
00:05:42.631  user	0m0.017s
00:05:42.631  sys	0m0.012s
00:05:42.631   06:15:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:42.631  ************************************
00:05:42.631  END TEST accel_negative_buffers
00:05:42.631   06:15:59	-- common/autotest_common.sh@10 -- # set +x
00:05:42.631  ************************************
00:05:42.631   06:15:59	-- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:05:42.631   06:15:59	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:05:42.631   06:15:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:42.631   06:15:59	-- common/autotest_common.sh@10 -- # set +x
00:05:42.631  ************************************
00:05:42.631  START TEST accel_crc32c
00:05:42.631  ************************************
00:05:42.631   06:15:59	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y
00:05:42.631   06:15:59	-- accel/accel.sh@16 -- # local accel_opc
00:05:42.631   06:15:59	-- accel/accel.sh@17 -- # local accel_module
00:05:42.631    06:15:59	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:05:42.631    06:15:59	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:05:42.631     06:15:59	-- accel/accel.sh@12 -- # build_accel_config
00:05:42.631     06:15:59	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:42.631     06:15:59	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:42.631     06:15:59	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:42.631     06:15:59	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:42.631     06:15:59	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:42.631     06:15:59	-- accel/accel.sh@41 -- # local IFS=,
00:05:42.631     06:15:59	-- accel/accel.sh@42 -- # jq -r .
00:05:42.631  [2024-12-16 06:15:59.476806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:42.631  [2024-12-16 06:15:59.476915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58399 ]
00:05:42.890  [2024-12-16 06:15:59.611265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.890  [2024-12-16 06:15:59.683905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.265   06:16:00	-- accel/accel.sh@18 -- # out='
00:05:44.265  SPDK Configuration:
00:05:44.265  Core mask:      0x1
00:05:44.265  
00:05:44.265  Accel Perf Configuration:
00:05:44.265  Workload Type:  crc32c
00:05:44.265  CRC-32C seed:   32
00:05:44.265  Transfer size:  4096 bytes
00:05:44.265  Vector count    1
00:05:44.265  Module:         software
00:05:44.265  Queue depth:    32
00:05:44.265  Allocate depth: 32
00:05:44.265  # threads/core: 1
00:05:44.265  Run time:       1 seconds
00:05:44.265  Verify:         Yes
00:05:44.265  
00:05:44.265  Running for 1 seconds...
00:05:44.265  
00:05:44.265  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:05:44.265  ------------------------------------------------------------------------------------
00:05:44.265  0,0                      534848/s       2089 MiB/s                0                0
00:05:44.265  ====================================================================================
00:05:44.265  Total                    534848/s       2089 MiB/s                0                0'
00:05:44.265   06:16:00	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265    06:16:00	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:05:44.265   06:16:00	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265    06:16:00	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:05:44.265     06:16:00	-- accel/accel.sh@12 -- # build_accel_config
00:05:44.265     06:16:00	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:44.265     06:16:00	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:44.265     06:16:00	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:44.265     06:16:00	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:44.265     06:16:00	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:44.265     06:16:00	-- accel/accel.sh@41 -- # local IFS=,
00:05:44.265     06:16:00	-- accel/accel.sh@42 -- # jq -r .
00:05:44.265  [2024-12-16 06:16:00.925838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:44.265  [2024-12-16 06:16:00.925946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58413 ]
00:05:44.265  [2024-12-16 06:16:01.053542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.265  [2024-12-16 06:16:01.128172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=0x1
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=crc32c
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=32
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val='4096 bytes'
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=software
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@23 -- # accel_module=software
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=32
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=32
00:05:44.265   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.265   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.265   06:16:01	-- accel/accel.sh@21 -- # val=1
00:05:44.266   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.266   06:16:01	-- accel/accel.sh@21 -- # val='1 seconds'
00:05:44.266   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.266   06:16:01	-- accel/accel.sh@21 -- # val=Yes
00:05:44.266   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.266   06:16:01	-- accel/accel.sh@21 -- # val=
00:05:44.266   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:44.266   06:16:01	-- accel/accel.sh@21 -- # val=
00:05:44.266   06:16:01	-- accel/accel.sh@22 -- # case "$var" in
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # IFS=:
00:05:44.266   06:16:01	-- accel/accel.sh@20 -- # read -r var val
00:05:45.642   06:16:02	-- accel/accel.sh@21 -- # val=
00:05:45.642   06:16:02	-- accel/accel.sh@22 -- # case "$var" in
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # IFS=:
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # read -r var val
00:05:45.642   06:16:02	-- accel/accel.sh@21 -- # val=
00:05:45.642   06:16:02	-- accel/accel.sh@22 -- # case "$var" in
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # IFS=:
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # read -r var val
00:05:45.642   06:16:02	-- accel/accel.sh@21 -- # val=
00:05:45.642   06:16:02	-- accel/accel.sh@22 -- # case "$var" in
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # IFS=:
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # read -r var val
00:05:45.642   06:16:02	-- accel/accel.sh@21 -- # val=
00:05:45.642   06:16:02	-- accel/accel.sh@22 -- # case "$var" in
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # IFS=:
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # read -r var val
00:05:45.642   06:16:02	-- accel/accel.sh@21 -- # val=
00:05:45.642   06:16:02	-- accel/accel.sh@22 -- # case "$var" in
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # IFS=:
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # read -r var val
00:05:45.642   06:16:02	-- accel/accel.sh@21 -- # val=
00:05:45.642   06:16:02	-- accel/accel.sh@22 -- # case "$var" in
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # IFS=:
00:05:45.642   06:16:02	-- accel/accel.sh@20 -- # read -r var val
00:05:45.642   06:16:02	-- accel/accel.sh@28 -- # [[ -n software ]]
00:05:45.642   06:16:02	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:05:45.642   06:16:02	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:45.642  
00:05:45.642  real	0m2.912s
00:05:45.642  user	0m2.494s
00:05:45.642  sys	0m0.218s
00:05:45.642   06:16:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:45.642   06:16:02	-- common/autotest_common.sh@10 -- # set +x
00:05:45.642  ************************************
00:05:45.642  END TEST accel_crc32c
00:05:45.642  ************************************
00:05:45.642   06:16:02	-- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:05:45.642   06:16:02	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:05:45.642   06:16:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:45.642   06:16:02	-- common/autotest_common.sh@10 -- # set +x
00:05:45.642  ************************************
00:05:45.642  START TEST accel_crc32c_C2
00:05:45.642  ************************************
00:05:45.642   06:16:02	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2
00:05:45.642   06:16:02	-- accel/accel.sh@16 -- # local accel_opc
00:05:45.642   06:16:02	-- accel/accel.sh@17 -- # local accel_module
00:05:45.642    06:16:02	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2
00:05:45.642    06:16:02	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:05:45.642     06:16:02	-- accel/accel.sh@12 -- # build_accel_config
00:05:45.642     06:16:02	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:45.642     06:16:02	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:45.642     06:16:02	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:45.642     06:16:02	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:45.642     06:16:02	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:45.642     06:16:02	-- accel/accel.sh@41 -- # local IFS=,
00:05:45.642     06:16:02	-- accel/accel.sh@42 -- # jq -r .
00:05:45.642  [2024-12-16 06:16:02.435913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:45.642  [2024-12-16 06:16:02.436012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58453 ]
00:05:45.642  [2024-12-16 06:16:02.562443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.901  [2024-12-16 06:16:02.639537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.278   06:16:03	-- accel/accel.sh@18 -- # out='
00:05:47.278  SPDK Configuration:
00:05:47.278  Core mask:      0x1
00:05:47.278  
00:05:47.278  Accel Perf Configuration:
00:05:47.278  Workload Type:  crc32c
00:05:47.278  CRC-32C seed:   0
00:05:47.278  Transfer size:  4096 bytes
00:05:47.278  Vector count    2
00:05:47.278  Module:         software
00:05:47.278  Queue depth:    32
00:05:47.278  Allocate depth: 32
00:05:47.278  # threads/core: 1
00:05:47.278  Run time:       1 seconds
00:05:47.278  Verify:         Yes
00:05:47.278  
00:05:47.278  Running for 1 seconds...
00:05:47.278  
00:05:47.278  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:05:47.278  ------------------------------------------------------------------------------------
00:05:47.278  0,0                      416352/s       3252 MiB/s                0                0
00:05:47.278  ====================================================================================
00:05:47.278  Total                    416352/s       1626 MiB/s                0                0'
00:05:47.278   06:16:03	-- accel/accel.sh@20 -- # IFS=:
00:05:47.278    06:16:03	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:05:47.278   06:16:03	-- accel/accel.sh@20 -- # read -r var val
00:05:47.278    06:16:03	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:05:47.278     06:16:03	-- accel/accel.sh@12 -- # build_accel_config
00:05:47.278     06:16:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:47.278     06:16:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:47.278     06:16:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:47.278     06:16:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:47.278     06:16:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:47.278     06:16:03	-- accel/accel.sh@41 -- # local IFS=,
00:05:47.278     06:16:03	-- accel/accel.sh@42 -- # jq -r .
00:05:47.278  [2024-12-16 06:16:03.878588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:47.278  [2024-12-16 06:16:03.878694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58467 ]
00:05:47.278  [2024-12-16 06:16:04.010582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.278  [2024-12-16 06:16:04.083222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.278   06:16:04	-- accel/accel.sh@21 -- # val=
00:05:47.278   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.278   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=0x1
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=crc32c
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=0
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val='4096 bytes'
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=software
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@23 -- # accel_module=software
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=32
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=32
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=1
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val='1 seconds'
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=Yes
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:47.279   06:16:04	-- accel/accel.sh@21 -- # val=
00:05:47.279   06:16:04	-- accel/accel.sh@22 -- # case "$var" in
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # IFS=:
00:05:47.279   06:16:04	-- accel/accel.sh@20 -- # read -r var val
00:05:48.655   06:16:05	-- accel/accel.sh@21 -- # val=
00:05:48.655   06:16:05	-- accel/accel.sh@22 -- # case "$var" in
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # IFS=:
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # read -r var val
00:05:48.655   06:16:05	-- accel/accel.sh@21 -- # val=
00:05:48.655   06:16:05	-- accel/accel.sh@22 -- # case "$var" in
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # IFS=:
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # read -r var val
00:05:48.655   06:16:05	-- accel/accel.sh@21 -- # val=
00:05:48.655   06:16:05	-- accel/accel.sh@22 -- # case "$var" in
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # IFS=:
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # read -r var val
00:05:48.655   06:16:05	-- accel/accel.sh@21 -- # val=
00:05:48.655   06:16:05	-- accel/accel.sh@22 -- # case "$var" in
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # IFS=:
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # read -r var val
00:05:48.655   06:16:05	-- accel/accel.sh@21 -- # val=
00:05:48.655   06:16:05	-- accel/accel.sh@22 -- # case "$var" in
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # IFS=:
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # read -r var val
00:05:48.655   06:16:05	-- accel/accel.sh@21 -- # val=
00:05:48.655   06:16:05	-- accel/accel.sh@22 -- # case "$var" in
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # IFS=:
00:05:48.655   06:16:05	-- accel/accel.sh@20 -- # read -r var val
00:05:48.655   06:16:05	-- accel/accel.sh@28 -- # [[ -n software ]]
00:05:48.655   06:16:05	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:05:48.655   06:16:05	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:48.655  
00:05:48.655  real	0m2.885s
00:05:48.655  user	0m2.480s
00:05:48.655  sys	0m0.206s
00:05:48.655   06:16:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:48.655   06:16:05	-- common/autotest_common.sh@10 -- # set +x
00:05:48.655  ************************************
00:05:48.655  END TEST accel_copy_crc32c_C2
00:05:48.655  ************************************
00:05:48.655   06:16:05	-- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:05:48.655   06:16:05	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:05:48.655   06:16:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:48.655   06:16:05	-- common/autotest_common.sh@10 -- # set +x
00:05:48.655  ************************************
00:05:48.655  START TEST accel_copy
00:05:48.655  ************************************
00:05:48.655   06:16:05	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y
00:05:48.655   06:16:05	-- accel/accel.sh@16 -- # local accel_opc
00:05:48.655   06:16:05	-- accel/accel.sh@17 -- # local accel_module
00:05:48.655    06:16:05	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y
00:05:48.655    06:16:05	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:48.655     06:16:05	-- accel/accel.sh@12 -- # build_accel_config
00:05:48.656     06:16:05	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:48.656     06:16:05	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:48.656     06:16:05	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:48.656     06:16:05	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:48.656     06:16:05	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:48.656     06:16:05	-- accel/accel.sh@41 -- # local IFS=,
00:05:48.656     06:16:05	-- accel/accel.sh@42 -- # jq -r .
00:05:48.656  [2024-12-16 06:16:05.374818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:48.656  [2024-12-16 06:16:05.374927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58501 ]
00:05:48.656  [2024-12-16 06:16:05.510915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.656  [2024-12-16 06:16:05.576006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.033   06:16:06	-- accel/accel.sh@18 -- # out='
00:05:50.033  SPDK Configuration:
00:05:50.033  Core mask:      0x1
00:05:50.033  
00:05:50.033  Accel Perf Configuration:
00:05:50.033  Workload Type:  copy
00:05:50.033  Transfer size:  4096 bytes
00:05:50.033  Vector count    1
00:05:50.033  Module:         software
00:05:50.033  Queue depth:    32
00:05:50.033  Allocate depth: 32
00:05:50.033  # threads/core: 1
00:05:50.033  Run time:       1 seconds
00:05:50.033  Verify:         Yes
00:05:50.033  
00:05:50.033  Running for 1 seconds...
00:05:50.033  
00:05:50.033  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:05:50.033  ------------------------------------------------------------------------------------
00:05:50.033  0,0                      378048/s       1476 MiB/s                0                0
00:05:50.033  ====================================================================================
00:05:50.033  Total                    378048/s       1476 MiB/s                0                0'
00:05:50.033   06:16:06	-- accel/accel.sh@20 -- # IFS=:
00:05:50.033    06:16:06	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:05:50.033   06:16:06	-- accel/accel.sh@20 -- # read -r var val
00:05:50.033     06:16:06	-- accel/accel.sh@12 -- # build_accel_config
00:05:50.033    06:16:06	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:50.033     06:16:06	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:50.033     06:16:06	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:50.033     06:16:06	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:50.033     06:16:06	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:50.033     06:16:06	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:50.033     06:16:06	-- accel/accel.sh@41 -- # local IFS=,
00:05:50.033     06:16:06	-- accel/accel.sh@42 -- # jq -r .
00:05:50.033  [2024-12-16 06:16:06.810112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:50.033  [2024-12-16 06:16:06.810208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58521 ]
00:05:50.033  [2024-12-16 06:16:06.941433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.033  [2024-12-16 06:16:07.005035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=0x1
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=copy
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@24 -- # accel_opc=copy
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val='4096 bytes'
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=software
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@23 -- # accel_module=software
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=32
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=32
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=1
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val='1 seconds'
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=Yes
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:50.292   06:16:07	-- accel/accel.sh@21 -- # val=
00:05:50.292   06:16:07	-- accel/accel.sh@22 -- # case "$var" in
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # IFS=:
00:05:50.292   06:16:07	-- accel/accel.sh@20 -- # read -r var val
00:05:51.692   06:16:08	-- accel/accel.sh@21 -- # val=
00:05:51.692   06:16:08	-- accel/accel.sh@22 -- # case "$var" in
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # IFS=:
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # read -r var val
00:05:51.692   06:16:08	-- accel/accel.sh@21 -- # val=
00:05:51.692   06:16:08	-- accel/accel.sh@22 -- # case "$var" in
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # IFS=:
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # read -r var val
00:05:51.692   06:16:08	-- accel/accel.sh@21 -- # val=
00:05:51.692   06:16:08	-- accel/accel.sh@22 -- # case "$var" in
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # IFS=:
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # read -r var val
00:05:51.692   06:16:08	-- accel/accel.sh@21 -- # val=
00:05:51.692   06:16:08	-- accel/accel.sh@22 -- # case "$var" in
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # IFS=:
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # read -r var val
00:05:51.692   06:16:08	-- accel/accel.sh@21 -- # val=
00:05:51.692   06:16:08	-- accel/accel.sh@22 -- # case "$var" in
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # IFS=:
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # read -r var val
00:05:51.692   06:16:08	-- accel/accel.sh@21 -- # val=
00:05:51.692   06:16:08	-- accel/accel.sh@22 -- # case "$var" in
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # IFS=:
00:05:51.692   06:16:08	-- accel/accel.sh@20 -- # read -r var val
00:05:51.692   06:16:08	-- accel/accel.sh@28 -- # [[ -n software ]]
00:05:51.692   06:16:08	-- accel/accel.sh@28 -- # [[ -n copy ]]
00:05:51.692   06:16:08	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:51.692  
00:05:51.692  real	0m2.866s
00:05:51.692  user	0m2.452s
00:05:51.692  sys	0m0.214s
00:05:51.692   06:16:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:51.692   06:16:08	-- common/autotest_common.sh@10 -- # set +x
00:05:51.692  ************************************
00:05:51.692  END TEST accel_copy
00:05:51.692  ************************************
00:05:51.692   06:16:08	-- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:51.692   06:16:08	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:05:51.692   06:16:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:51.692   06:16:08	-- common/autotest_common.sh@10 -- # set +x
00:05:51.692  ************************************
00:05:51.692  START TEST accel_fill
00:05:51.692  ************************************
00:05:51.692   06:16:08	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:51.692   06:16:08	-- accel/accel.sh@16 -- # local accel_opc
00:05:51.692   06:16:08	-- accel/accel.sh@17 -- # local accel_module
00:05:51.692    06:16:08	-- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:51.692    06:16:08	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:51.692     06:16:08	-- accel/accel.sh@12 -- # build_accel_config
00:05:51.692     06:16:08	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:51.692     06:16:08	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:51.692     06:16:08	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:51.692     06:16:08	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:51.692     06:16:08	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:51.692     06:16:08	-- accel/accel.sh@41 -- # local IFS=,
00:05:51.692     06:16:08	-- accel/accel.sh@42 -- # jq -r .
00:05:51.692  [2024-12-16 06:16:08.292890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:51.692  [2024-12-16 06:16:08.293001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58550 ]
00:05:51.692  [2024-12-16 06:16:08.432628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:51.692  [2024-12-16 06:16:08.531500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.070   06:16:09	-- accel/accel.sh@18 -- # out='
00:05:53.070  SPDK Configuration:
00:05:53.070  Core mask:      0x1
00:05:53.070  
00:05:53.070  Accel Perf Configuration:
00:05:53.070  Workload Type:  fill
00:05:53.070  Fill pattern:   0x80
00:05:53.070  Transfer size:  4096 bytes
00:05:53.070  Vector count    1
00:05:53.070  Module:         software
00:05:53.070  Queue depth:    64
00:05:53.070  Allocate depth: 64
00:05:53.070  # threads/core: 1
00:05:53.070  Run time:       1 seconds
00:05:53.070  Verify:         Yes
00:05:53.070  
00:05:53.070  Running for 1 seconds...
00:05:53.070  
00:05:53.070  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:05:53.070  ------------------------------------------------------------------------------------
00:05:53.070  0,0                      548736/s       2143 MiB/s                0                0
00:05:53.070  ====================================================================================
00:05:53.070  Total                    548736/s       2143 MiB/s                0                0'
00:05:53.070   06:16:09	-- accel/accel.sh@20 -- # IFS=:
00:05:53.070    06:16:09	-- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.070   06:16:09	-- accel/accel.sh@20 -- # read -r var val
00:05:53.070    06:16:09	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.070     06:16:09	-- accel/accel.sh@12 -- # build_accel_config
00:05:53.070     06:16:09	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:53.070     06:16:09	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:53.070     06:16:09	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:53.070     06:16:09	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:53.070     06:16:09	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:53.070     06:16:09	-- accel/accel.sh@41 -- # local IFS=,
00:05:53.070     06:16:09	-- accel/accel.sh@42 -- # jq -r .
00:05:53.070  [2024-12-16 06:16:09.783409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:53.070  [2024-12-16 06:16:09.783525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58575 ]
00:05:53.070  [2024-12-16 06:16:09.919418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.070  [2024-12-16 06:16:10.016463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=0x1
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=fill
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@24 -- # accel_opc=fill
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=0x80
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val='4096 bytes'
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=software
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@23 -- # accel_module=software
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=64
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=64
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=1
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val='1 seconds'
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=Yes
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:53.329   06:16:10	-- accel/accel.sh@21 -- # val=
00:05:53.329   06:16:10	-- accel/accel.sh@22 -- # case "$var" in
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # IFS=:
00:05:53.329   06:16:10	-- accel/accel.sh@20 -- # read -r var val
00:05:54.266   06:16:11	-- accel/accel.sh@21 -- # val=
00:05:54.266   06:16:11	-- accel/accel.sh@22 -- # case "$var" in
00:05:54.266   06:16:11	-- accel/accel.sh@20 -- # IFS=:
00:05:54.266   06:16:11	-- accel/accel.sh@20 -- # read -r var val
00:05:54.266   06:16:11	-- accel/accel.sh@21 -- # val=
00:05:54.266   06:16:11	-- accel/accel.sh@22 -- # case "$var" in
00:05:54.266   06:16:11	-- accel/accel.sh@20 -- # IFS=:
00:05:54.266   06:16:11	-- accel/accel.sh@20 -- # read -r var val
00:05:54.266   06:16:11	-- accel/accel.sh@21 -- # val=
00:05:54.266   06:16:11	-- accel/accel.sh@22 -- # case "$var" in
00:05:54.266   06:16:11	-- accel/accel.sh@20 -- # IFS=:
00:05:54.266   06:16:11	-- accel/accel.sh@20 -- # read -r var val
00:05:54.266   06:16:11	-- accel/accel.sh@21 -- # val=
00:05:54.266   06:16:11	-- accel/accel.sh@22 -- # case "$var" in
00:05:54.266   06:16:11	-- accel/accel.sh@20 -- # IFS=:
00:05:54.266   06:16:11	-- accel/accel.sh@20 -- # read -r var val
00:05:54.526   06:16:11	-- accel/accel.sh@21 -- # val=
00:05:54.526   06:16:11	-- accel/accel.sh@22 -- # case "$var" in
00:05:54.526   06:16:11	-- accel/accel.sh@20 -- # IFS=:
00:05:54.526   06:16:11	-- accel/accel.sh@20 -- # read -r var val
00:05:54.526   06:16:11	-- accel/accel.sh@21 -- # val=
00:05:54.526   06:16:11	-- accel/accel.sh@22 -- # case "$var" in
00:05:54.526   06:16:11	-- accel/accel.sh@20 -- # IFS=:
00:05:54.526   06:16:11	-- accel/accel.sh@20 -- # read -r var val
00:05:54.526   06:16:11	-- accel/accel.sh@28 -- # [[ -n software ]]
00:05:54.526   06:16:11	-- accel/accel.sh@28 -- # [[ -n fill ]]
00:05:54.526   06:16:11	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:54.526  
00:05:54.526  real	0m2.975s
00:05:54.526  user	0m2.553s
00:05:54.526  sys	0m0.217s
00:05:54.526   06:16:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:54.526   06:16:11	-- common/autotest_common.sh@10 -- # set +x
00:05:54.526  ************************************
00:05:54.526  END TEST accel_fill
00:05:54.526  ************************************
00:05:54.526   06:16:11	-- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:05:54.526   06:16:11	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:05:54.526   06:16:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:54.526   06:16:11	-- common/autotest_common.sh@10 -- # set +x
00:05:54.526  ************************************
00:05:54.526  START TEST accel_copy_crc32c
00:05:54.526  ************************************
00:05:54.526   06:16:11	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y
00:05:54.526   06:16:11	-- accel/accel.sh@16 -- # local accel_opc
00:05:54.526   06:16:11	-- accel/accel.sh@17 -- # local accel_module
00:05:54.526    06:16:11	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:54.526    06:16:11	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:54.526     06:16:11	-- accel/accel.sh@12 -- # build_accel_config
00:05:54.526     06:16:11	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:54.526     06:16:11	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:54.526     06:16:11	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:54.526     06:16:11	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:54.526     06:16:11	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:54.526     06:16:11	-- accel/accel.sh@41 -- # local IFS=,
00:05:54.526     06:16:11	-- accel/accel.sh@42 -- # jq -r .
00:05:54.526  [2024-12-16 06:16:11.322963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:54.526  [2024-12-16 06:16:11.323079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58604 ]
00:05:54.526  [2024-12-16 06:16:11.459600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:54.784  [2024-12-16 06:16:11.557418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.161   06:16:12	-- accel/accel.sh@18 -- # out='
00:05:56.161  SPDK Configuration:
00:05:56.161  Core mask:      0x1
00:05:56.161  
00:05:56.161  Accel Perf Configuration:
00:05:56.161  Workload Type:  copy_crc32c
00:05:56.161  CRC-32C seed:   0
00:05:56.161  Vector size:    4096 bytes
00:05:56.161  Transfer size:  4096 bytes
00:05:56.161  Vector count    1
00:05:56.161  Module:         software
00:05:56.161  Queue depth:    32
00:05:56.161  Allocate depth: 32
00:05:56.161  # threads/core: 1
00:05:56.162  Run time:       1 seconds
00:05:56.162  Verify:         Yes
00:05:56.162  
00:05:56.162  Running for 1 seconds...
00:05:56.162  
00:05:56.162  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:05:56.162  ------------------------------------------------------------------------------------
00:05:56.162  0,0                      298752/s       1167 MiB/s                0                0
00:05:56.162  ====================================================================================
00:05:56.162  Total                    298752/s       1167 MiB/s                0                0'
00:05:56.162   06:16:12	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162    06:16:12	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:56.162   06:16:12	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162     06:16:12	-- accel/accel.sh@12 -- # build_accel_config
00:05:56.162    06:16:12	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:56.162     06:16:12	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:56.162     06:16:12	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:56.162     06:16:12	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:56.162     06:16:12	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:56.162     06:16:12	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:56.162     06:16:12	-- accel/accel.sh@41 -- # local IFS=,
00:05:56.162     06:16:12	-- accel/accel.sh@42 -- # jq -r .
00:05:56.162  [2024-12-16 06:16:12.812794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:56.162  [2024-12-16 06:16:12.812931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58629 ]
00:05:56.162  [2024-12-16 06:16:12.941796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.162  [2024-12-16 06:16:13.037119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=0x1
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=copy_crc32c
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=0
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val='4096 bytes'
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val='4096 bytes'
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=software
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@23 -- # accel_module=software
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=32
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=32
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=1
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val='1 seconds'
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=Yes
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:56.162   06:16:13	-- accel/accel.sh@21 -- # val=
00:05:56.162   06:16:13	-- accel/accel.sh@22 -- # case "$var" in
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # IFS=:
00:05:56.162   06:16:13	-- accel/accel.sh@20 -- # read -r var val
00:05:57.539   06:16:14	-- accel/accel.sh@21 -- # val=
00:05:57.539   06:16:14	-- accel/accel.sh@22 -- # case "$var" in
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # IFS=:
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # read -r var val
00:05:57.539   06:16:14	-- accel/accel.sh@21 -- # val=
00:05:57.539   06:16:14	-- accel/accel.sh@22 -- # case "$var" in
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # IFS=:
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # read -r var val
00:05:57.539   06:16:14	-- accel/accel.sh@21 -- # val=
00:05:57.539   06:16:14	-- accel/accel.sh@22 -- # case "$var" in
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # IFS=:
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # read -r var val
00:05:57.539   06:16:14	-- accel/accel.sh@21 -- # val=
00:05:57.539   06:16:14	-- accel/accel.sh@22 -- # case "$var" in
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # IFS=:
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # read -r var val
00:05:57.539   06:16:14	-- accel/accel.sh@21 -- # val=
00:05:57.539   06:16:14	-- accel/accel.sh@22 -- # case "$var" in
00:05:57.539   06:16:14	-- accel/accel.sh@20 -- # IFS=:
00:05:57.540   06:16:14	-- accel/accel.sh@20 -- # read -r var val
00:05:57.540   06:16:14	-- accel/accel.sh@21 -- # val=
00:05:57.540   06:16:14	-- accel/accel.sh@22 -- # case "$var" in
00:05:57.540   06:16:14	-- accel/accel.sh@20 -- # IFS=:
00:05:57.540   06:16:14	-- accel/accel.sh@20 -- # read -r var val
00:05:57.540   06:16:14	-- accel/accel.sh@28 -- # [[ -n software ]]
00:05:57.540   06:16:14	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:05:57.540   06:16:14	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:57.540  
00:05:57.540  real	0m2.968s
00:05:57.540  user	0m2.555s
00:05:57.540  sys	0m0.214s
00:05:57.540   06:16:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:57.540   06:16:14	-- common/autotest_common.sh@10 -- # set +x
00:05:57.540  ************************************
00:05:57.540  END TEST accel_copy_crc32c
00:05:57.540  ************************************
00:05:57.540   06:16:14	-- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:57.540   06:16:14	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:05:57.540   06:16:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:57.540   06:16:14	-- common/autotest_common.sh@10 -- # set +x
00:05:57.540  ************************************
00:05:57.540  START TEST accel_copy_crc32c_C2
00:05:57.540  ************************************
00:05:57.540   06:16:14	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:05:57.540   06:16:14	-- accel/accel.sh@16 -- # local accel_opc
00:05:57.540   06:16:14	-- accel/accel.sh@17 -- # local accel_module
00:05:57.540    06:16:14	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:57.540    06:16:14	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:57.540     06:16:14	-- accel/accel.sh@12 -- # build_accel_config
00:05:57.540     06:16:14	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:57.540     06:16:14	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:57.540     06:16:14	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:57.540     06:16:14	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:57.540     06:16:14	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:57.540     06:16:14	-- accel/accel.sh@41 -- # local IFS=,
00:05:57.540     06:16:14	-- accel/accel.sh@42 -- # jq -r .
00:05:57.540  [2024-12-16 06:16:14.347235] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:57.540  [2024-12-16 06:16:14.347337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58658 ]
00:05:57.540  [2024-12-16 06:16:14.480298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:57.799  [2024-12-16 06:16:14.585137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.176   06:16:15	-- accel/accel.sh@18 -- # out='
00:05:59.176  SPDK Configuration:
00:05:59.176  Core mask:      0x1
00:05:59.176  
00:05:59.176  Accel Perf Configuration:
00:05:59.176  Workload Type:  copy_crc32c
00:05:59.176  CRC-32C seed:   0
00:05:59.176  Vector size:    4096 bytes
00:05:59.176  Transfer size:  8192 bytes
00:05:59.176  Vector count    2
00:05:59.176  Module:         software
00:05:59.176  Queue depth:    32
00:05:59.176  Allocate depth: 32
00:05:59.176  # threads/core: 1
00:05:59.176  Run time:       1 seconds
00:05:59.176  Verify:         Yes
00:05:59.176  
00:05:59.176  Running for 1 seconds...
00:05:59.176  
00:05:59.176  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:05:59.176  ------------------------------------------------------------------------------------
00:05:59.176  0,0                      214624/s       1676 MiB/s                0                0
00:05:59.176  ====================================================================================
00:05:59.176  Total                    214624/s        838 MiB/s                0                0'
00:05:59.176   06:16:15	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176    06:16:15	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:59.176   06:16:15	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176     06:16:15	-- accel/accel.sh@12 -- # build_accel_config
00:05:59.176    06:16:15	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:59.176     06:16:15	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:59.176     06:16:15	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:59.176     06:16:15	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:59.176     06:16:15	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:59.176     06:16:15	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:59.176     06:16:15	-- accel/accel.sh@41 -- # local IFS=,
00:05:59.176     06:16:15	-- accel/accel.sh@42 -- # jq -r .
00:05:59.176  [2024-12-16 06:16:15.833856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:59.176  [2024-12-16 06:16:15.833993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58679 ]
00:05:59.176  [2024-12-16 06:16:15.970906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.176  [2024-12-16 06:16:16.046312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=0x1
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=copy_crc32c
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=0
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val='4096 bytes'
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val='8192 bytes'
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=software
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@23 -- # accel_module=software
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=32
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=32
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=1
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val='1 seconds'
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=Yes
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:05:59.176   06:16:16	-- accel/accel.sh@21 -- # val=
00:05:59.176   06:16:16	-- accel/accel.sh@22 -- # case "$var" in
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # IFS=:
00:05:59.176   06:16:16	-- accel/accel.sh@20 -- # read -r var val
00:06:00.579   06:16:17	-- accel/accel.sh@21 -- # val=
00:06:00.579   06:16:17	-- accel/accel.sh@22 -- # case "$var" in
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # IFS=:
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # read -r var val
00:06:00.579   06:16:17	-- accel/accel.sh@21 -- # val=
00:06:00.579   06:16:17	-- accel/accel.sh@22 -- # case "$var" in
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # IFS=:
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # read -r var val
00:06:00.579   06:16:17	-- accel/accel.sh@21 -- # val=
00:06:00.579   06:16:17	-- accel/accel.sh@22 -- # case "$var" in
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # IFS=:
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # read -r var val
00:06:00.579   06:16:17	-- accel/accel.sh@21 -- # val=
00:06:00.579   06:16:17	-- accel/accel.sh@22 -- # case "$var" in
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # IFS=:
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # read -r var val
00:06:00.579   06:16:17	-- accel/accel.sh@21 -- # val=
00:06:00.579   06:16:17	-- accel/accel.sh@22 -- # case "$var" in
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # IFS=:
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # read -r var val
00:06:00.579   06:16:17	-- accel/accel.sh@21 -- # val=
00:06:00.579   06:16:17	-- accel/accel.sh@22 -- # case "$var" in
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # IFS=:
00:06:00.579   06:16:17	-- accel/accel.sh@20 -- # read -r var val
00:06:00.579   06:16:17	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:00.579   06:16:17	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:06:00.579   06:16:17	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:00.579  
00:06:00.579  real	0m2.945s
00:06:00.579  user	0m2.526s
00:06:00.579  sys	0m0.220s
00:06:00.579   06:16:17	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:00.579   06:16:17	-- common/autotest_common.sh@10 -- # set +x
00:06:00.579  ************************************
00:06:00.579  END TEST accel_copy_crc32c_C2
00:06:00.579  ************************************
00:06:00.579   06:16:17	-- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:00.579   06:16:17	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:00.579   06:16:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:00.579   06:16:17	-- common/autotest_common.sh@10 -- # set +x
00:06:00.579  ************************************
00:06:00.579  START TEST accel_dualcast
00:06:00.579  ************************************
00:06:00.579   06:16:17	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y
00:06:00.579   06:16:17	-- accel/accel.sh@16 -- # local accel_opc
00:06:00.579   06:16:17	-- accel/accel.sh@17 -- # local accel_module
00:06:00.579    06:16:17	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y
00:06:00.579    06:16:17	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:00.579     06:16:17	-- accel/accel.sh@12 -- # build_accel_config
00:06:00.579     06:16:17	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:00.579     06:16:17	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:00.579     06:16:17	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:00.579     06:16:17	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:00.579     06:16:17	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:00.579     06:16:17	-- accel/accel.sh@41 -- # local IFS=,
00:06:00.579     06:16:17	-- accel/accel.sh@42 -- # jq -r .
00:06:00.579  [2024-12-16 06:16:17.346351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:00.579  [2024-12-16 06:16:17.346437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58712 ]
00:06:00.579  [2024-12-16 06:16:17.477817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.579  [2024-12-16 06:16:17.550165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.956   06:16:18	-- accel/accel.sh@18 -- # out='
00:06:01.956  SPDK Configuration:
00:06:01.956  Core mask:      0x1
00:06:01.956  
00:06:01.956  Accel Perf Configuration:
00:06:01.956  Workload Type:  dualcast
00:06:01.956  Transfer size:  4096 bytes
00:06:01.956  Vector count    1
00:06:01.956  Module:         software
00:06:01.956  Queue depth:    32
00:06:01.956  Allocate depth: 32
00:06:01.956  # threads/core: 1
00:06:01.956  Run time:       1 seconds
00:06:01.956  Verify:         Yes
00:06:01.956  
00:06:01.956  Running for 1 seconds...
00:06:01.956  
00:06:01.956  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:01.956  ------------------------------------------------------------------------------------
00:06:01.956  0,0                      416096/s       1625 MiB/s                0                0
00:06:01.956  ====================================================================================
00:06:01.956  Total                    416096/s       1625 MiB/s                0                0'
00:06:01.956   06:16:18	-- accel/accel.sh@20 -- # IFS=:
00:06:01.956    06:16:18	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:01.956   06:16:18	-- accel/accel.sh@20 -- # read -r var val
00:06:01.956    06:16:18	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:01.956     06:16:18	-- accel/accel.sh@12 -- # build_accel_config
00:06:01.956     06:16:18	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:01.956     06:16:18	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:01.956     06:16:18	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:01.956     06:16:18	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:01.956     06:16:18	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:01.956     06:16:18	-- accel/accel.sh@41 -- # local IFS=,
00:06:01.956     06:16:18	-- accel/accel.sh@42 -- # jq -r .
00:06:01.956  [2024-12-16 06:16:18.789076] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:01.956  [2024-12-16 06:16:18.789170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58732 ]
00:06:01.956  [2024-12-16 06:16:18.924069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.215  [2024-12-16 06:16:18.998840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.215   06:16:19	-- accel/accel.sh@21 -- # val=
00:06:02.215   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.215   06:16:19	-- accel/accel.sh@21 -- # val=
00:06:02.215   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.215   06:16:19	-- accel/accel.sh@21 -- # val=0x1
00:06:02.215   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.215   06:16:19	-- accel/accel.sh@21 -- # val=
00:06:02.215   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.215   06:16:19	-- accel/accel.sh@21 -- # val=
00:06:02.215   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.215   06:16:19	-- accel/accel.sh@21 -- # val=dualcast
00:06:02.215   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.215   06:16:19	-- accel/accel.sh@24 -- # accel_opc=dualcast
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.215   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val=
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val=software
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@23 -- # accel_module=software
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val=32
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val=32
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val=1
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val=Yes
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val=
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:02.216   06:16:19	-- accel/accel.sh@21 -- # val=
00:06:02.216   06:16:19	-- accel/accel.sh@22 -- # case "$var" in
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # IFS=:
00:06:02.216   06:16:19	-- accel/accel.sh@20 -- # read -r var val
00:06:03.593   06:16:20	-- accel/accel.sh@21 -- # val=
00:06:03.593   06:16:20	-- accel/accel.sh@22 -- # case "$var" in
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # IFS=:
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # read -r var val
00:06:03.593   06:16:20	-- accel/accel.sh@21 -- # val=
00:06:03.593   06:16:20	-- accel/accel.sh@22 -- # case "$var" in
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # IFS=:
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # read -r var val
00:06:03.593   06:16:20	-- accel/accel.sh@21 -- # val=
00:06:03.593   06:16:20	-- accel/accel.sh@22 -- # case "$var" in
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # IFS=:
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # read -r var val
00:06:03.593   06:16:20	-- accel/accel.sh@21 -- # val=
00:06:03.593   06:16:20	-- accel/accel.sh@22 -- # case "$var" in
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # IFS=:
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # read -r var val
00:06:03.593  ************************************
00:06:03.593  END TEST accel_dualcast
00:06:03.593  ************************************
00:06:03.593   06:16:20	-- accel/accel.sh@21 -- # val=
00:06:03.593   06:16:20	-- accel/accel.sh@22 -- # case "$var" in
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # IFS=:
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # read -r var val
00:06:03.593   06:16:20	-- accel/accel.sh@21 -- # val=
00:06:03.593   06:16:20	-- accel/accel.sh@22 -- # case "$var" in
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # IFS=:
00:06:03.593   06:16:20	-- accel/accel.sh@20 -- # read -r var val
00:06:03.593   06:16:20	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:03.593   06:16:20	-- accel/accel.sh@28 -- # [[ -n dualcast ]]
00:06:03.593   06:16:20	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:03.593  
00:06:03.593  real	0m2.893s
00:06:03.593  user	0m2.488s
00:06:03.593  sys	0m0.203s
00:06:03.593   06:16:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:03.593   06:16:20	-- common/autotest_common.sh@10 -- # set +x
00:06:03.593   06:16:20	-- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:03.593   06:16:20	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:03.593   06:16:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:03.593   06:16:20	-- common/autotest_common.sh@10 -- # set +x
00:06:03.593  ************************************
00:06:03.593  START TEST accel_compare
00:06:03.593  ************************************
00:06:03.593   06:16:20	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y
00:06:03.593   06:16:20	-- accel/accel.sh@16 -- # local accel_opc
00:06:03.593   06:16:20	-- accel/accel.sh@17 -- # local accel_module
00:06:03.593    06:16:20	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y
00:06:03.593    06:16:20	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:03.593     06:16:20	-- accel/accel.sh@12 -- # build_accel_config
00:06:03.593     06:16:20	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:03.593     06:16:20	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:03.593     06:16:20	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:03.593     06:16:20	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:03.593     06:16:20	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:03.593     06:16:20	-- accel/accel.sh@41 -- # local IFS=,
00:06:03.593     06:16:20	-- accel/accel.sh@42 -- # jq -r .
00:06:03.593  [2024-12-16 06:16:20.297062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:03.593  [2024-12-16 06:16:20.297156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58766 ]
00:06:03.593  [2024-12-16 06:16:20.432051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.593  [2024-12-16 06:16:20.500386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.970   06:16:21	-- accel/accel.sh@18 -- # out='
00:06:04.970  SPDK Configuration:
00:06:04.970  Core mask:      0x1
00:06:04.970  
00:06:04.970  Accel Perf Configuration:
00:06:04.970  Workload Type:  compare
00:06:04.970  Transfer size:  4096 bytes
00:06:04.970  Vector count    1
00:06:04.970  Module:         software
00:06:04.970  Queue depth:    32
00:06:04.970  Allocate depth: 32
00:06:04.970  # threads/core: 1
00:06:04.970  Run time:       1 seconds
00:06:04.970  Verify:         Yes
00:06:04.970  
00:06:04.970  Running for 1 seconds...
00:06:04.970  
00:06:04.970  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:04.970  ------------------------------------------------------------------------------------
00:06:04.970  0,0                      540480/s       2111 MiB/s                0                0
00:06:04.970  ====================================================================================
00:06:04.970  Total                    540480/s       2111 MiB/s                0                0'
00:06:04.970   06:16:21	-- accel/accel.sh@20 -- # IFS=:
00:06:04.970   06:16:21	-- accel/accel.sh@20 -- # read -r var val
00:06:04.970    06:16:21	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:04.970     06:16:21	-- accel/accel.sh@12 -- # build_accel_config
00:06:04.970    06:16:21	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:04.970     06:16:21	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:04.970     06:16:21	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:04.970     06:16:21	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:04.970     06:16:21	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:04.970     06:16:21	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:04.970     06:16:21	-- accel/accel.sh@41 -- # local IFS=,
00:06:04.970     06:16:21	-- accel/accel.sh@42 -- # jq -r .
00:06:04.970  [2024-12-16 06:16:21.753250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:04.970  [2024-12-16 06:16:21.753356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58786 ]
00:06:04.970  [2024-12-16 06:16:21.889896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.229  [2024-12-16 06:16:21.960388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val=
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val=
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val=0x1
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val=
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val=
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val=compare
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@24 -- # accel_opc=compare
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val=
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.229   06:16:22	-- accel/accel.sh@21 -- # val=software
00:06:05.229   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.229   06:16:22	-- accel/accel.sh@23 -- # accel_module=software
00:06:05.229   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.230   06:16:22	-- accel/accel.sh@21 -- # val=32
00:06:05.230   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.230   06:16:22	-- accel/accel.sh@21 -- # val=32
00:06:05.230   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.230   06:16:22	-- accel/accel.sh@21 -- # val=1
00:06:05.230   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.230   06:16:22	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:05.230   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.230   06:16:22	-- accel/accel.sh@21 -- # val=Yes
00:06:05.230   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.230   06:16:22	-- accel/accel.sh@21 -- # val=
00:06:05.230   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:05.230   06:16:22	-- accel/accel.sh@21 -- # val=
00:06:05.230   06:16:22	-- accel/accel.sh@22 -- # case "$var" in
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # IFS=:
00:06:05.230   06:16:22	-- accel/accel.sh@20 -- # read -r var val
00:06:06.607   06:16:23	-- accel/accel.sh@21 -- # val=
00:06:06.607   06:16:23	-- accel/accel.sh@22 -- # case "$var" in
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # IFS=:
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # read -r var val
00:06:06.607   06:16:23	-- accel/accel.sh@21 -- # val=
00:06:06.607   06:16:23	-- accel/accel.sh@22 -- # case "$var" in
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # IFS=:
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # read -r var val
00:06:06.607   06:16:23	-- accel/accel.sh@21 -- # val=
00:06:06.607   06:16:23	-- accel/accel.sh@22 -- # case "$var" in
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # IFS=:
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # read -r var val
00:06:06.607   06:16:23	-- accel/accel.sh@21 -- # val=
00:06:06.607   06:16:23	-- accel/accel.sh@22 -- # case "$var" in
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # IFS=:
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # read -r var val
00:06:06.607   06:16:23	-- accel/accel.sh@21 -- # val=
00:06:06.607   06:16:23	-- accel/accel.sh@22 -- # case "$var" in
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # IFS=:
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # read -r var val
00:06:06.607   06:16:23	-- accel/accel.sh@21 -- # val=
00:06:06.607   06:16:23	-- accel/accel.sh@22 -- # case "$var" in
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # IFS=:
00:06:06.607   06:16:23	-- accel/accel.sh@20 -- # read -r var val
00:06:06.607   06:16:23	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:06.607   06:16:23	-- accel/accel.sh@28 -- # [[ -n compare ]]
00:06:06.607   06:16:23	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:06.607  
00:06:06.607  real	0m2.923s
00:06:06.607  user	0m2.497s
00:06:06.607  sys	0m0.223s
00:06:06.607  ************************************
00:06:06.607  END TEST accel_compare
00:06:06.607  ************************************
00:06:06.607   06:16:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:06.607   06:16:23	-- common/autotest_common.sh@10 -- # set +x
00:06:06.607   06:16:23	-- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:06.607   06:16:23	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:06.607   06:16:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:06.607   06:16:23	-- common/autotest_common.sh@10 -- # set +x
00:06:06.607  ************************************
00:06:06.607  START TEST accel_xor
00:06:06.607  ************************************
00:06:06.607   06:16:23	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y
00:06:06.607   06:16:23	-- accel/accel.sh@16 -- # local accel_opc
00:06:06.607   06:16:23	-- accel/accel.sh@17 -- # local accel_module
00:06:06.607    06:16:23	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y
00:06:06.607    06:16:23	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:06.607     06:16:23	-- accel/accel.sh@12 -- # build_accel_config
00:06:06.607     06:16:23	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:06.607     06:16:23	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:06.607     06:16:23	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:06.607     06:16:23	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:06.608     06:16:23	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:06.608     06:16:23	-- accel/accel.sh@41 -- # local IFS=,
00:06:06.608     06:16:23	-- accel/accel.sh@42 -- # jq -r .
00:06:06.608  [2024-12-16 06:16:23.266866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:06.608  [2024-12-16 06:16:23.267146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58820 ]
00:06:06.608  [2024-12-16 06:16:23.403900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.608  [2024-12-16 06:16:23.471210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.983   06:16:24	-- accel/accel.sh@18 -- # out='
00:06:07.983  SPDK Configuration:
00:06:07.983  Core mask:      0x1
00:06:07.983  
00:06:07.983  Accel Perf Configuration:
00:06:07.983  Workload Type:  xor
00:06:07.983  Source buffers: 2
00:06:07.983  Transfer size:  4096 bytes
00:06:07.983  Vector count    1
00:06:07.983  Module:         software
00:06:07.983  Queue depth:    32
00:06:07.983  Allocate depth: 32
00:06:07.983  # threads/core: 1
00:06:07.983  Run time:       1 seconds
00:06:07.983  Verify:         Yes
00:06:07.983  
00:06:07.983  Running for 1 seconds...
00:06:07.983  
00:06:07.983  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:07.983  ------------------------------------------------------------------------------------
00:06:07.983  0,0                      274944/s       1074 MiB/s                0                0
00:06:07.983  ====================================================================================
00:06:07.983  Total                    274944/s       1074 MiB/s                0                0'
00:06:07.983   06:16:24	-- accel/accel.sh@20 -- # IFS=:
00:06:07.983    06:16:24	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:07.983   06:16:24	-- accel/accel.sh@20 -- # read -r var val
00:06:07.983    06:16:24	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:07.983     06:16:24	-- accel/accel.sh@12 -- # build_accel_config
00:06:07.984     06:16:24	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:07.984     06:16:24	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:07.984     06:16:24	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:07.984     06:16:24	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:07.984     06:16:24	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:07.984     06:16:24	-- accel/accel.sh@41 -- # local IFS=,
00:06:07.984     06:16:24	-- accel/accel.sh@42 -- # jq -r .
00:06:07.984  [2024-12-16 06:16:24.725546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:07.984  [2024-12-16 06:16:24.725644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58834 ]
00:06:07.984  [2024-12-16 06:16:24.864458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.984  [2024-12-16 06:16:24.934394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.243   06:16:24	-- accel/accel.sh@21 -- # val=
00:06:08.243   06:16:24	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:24	-- accel/accel.sh@21 -- # val=
00:06:08.243   06:16:24	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:24	-- accel/accel.sh@21 -- # val=0x1
00:06:08.243   06:16:24	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:24	-- accel/accel.sh@21 -- # val=
00:06:08.243   06:16:24	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:24	-- accel/accel.sh@21 -- # val=
00:06:08.243   06:16:24	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:24	-- accel/accel.sh@21 -- # val=xor
00:06:08.243   06:16:24	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:24	-- accel/accel.sh@24 -- # accel_opc=xor
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:24	-- accel/accel.sh@21 -- # val=2
00:06:08.243   06:16:24	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:24	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:24	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val=
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val=software
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@23 -- # accel_module=software
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val=32
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val=32
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val=1
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val=Yes
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val=
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:08.243   06:16:25	-- accel/accel.sh@21 -- # val=
00:06:08.243   06:16:25	-- accel/accel.sh@22 -- # case "$var" in
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # IFS=:
00:06:08.243   06:16:25	-- accel/accel.sh@20 -- # read -r var val
00:06:09.178   06:16:26	-- accel/accel.sh@21 -- # val=
00:06:09.178   06:16:26	-- accel/accel.sh@22 -- # case "$var" in
00:06:09.178   06:16:26	-- accel/accel.sh@20 -- # IFS=:
00:06:09.178   06:16:26	-- accel/accel.sh@20 -- # read -r var val
00:06:09.178   06:16:26	-- accel/accel.sh@21 -- # val=
00:06:09.178   06:16:26	-- accel/accel.sh@22 -- # case "$var" in
00:06:09.178   06:16:26	-- accel/accel.sh@20 -- # IFS=:
00:06:09.178   06:16:26	-- accel/accel.sh@20 -- # read -r var val
00:06:09.178   06:16:26	-- accel/accel.sh@21 -- # val=
00:06:09.178   06:16:26	-- accel/accel.sh@22 -- # case "$var" in
00:06:09.178   06:16:26	-- accel/accel.sh@20 -- # IFS=:
00:06:09.178   06:16:26	-- accel/accel.sh@20 -- # read -r var val
00:06:09.178   06:16:26	-- accel/accel.sh@21 -- # val=
00:06:09.178   06:16:26	-- accel/accel.sh@22 -- # case "$var" in
00:06:09.178   06:16:26	-- accel/accel.sh@20 -- # IFS=:
00:06:09.178   06:16:26	-- accel/accel.sh@20 -- # read -r var val
00:06:09.178   06:16:26	-- accel/accel.sh@21 -- # val=
00:06:09.437   06:16:26	-- accel/accel.sh@22 -- # case "$var" in
00:06:09.437   06:16:26	-- accel/accel.sh@20 -- # IFS=:
00:06:09.437   06:16:26	-- accel/accel.sh@20 -- # read -r var val
00:06:09.437   06:16:26	-- accel/accel.sh@21 -- # val=
00:06:09.437   06:16:26	-- accel/accel.sh@22 -- # case "$var" in
00:06:09.437   06:16:26	-- accel/accel.sh@20 -- # IFS=:
00:06:09.437   06:16:26	-- accel/accel.sh@20 -- # read -r var val
00:06:09.437   06:16:26	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:09.437   06:16:26	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:06:09.437   06:16:26	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:09.437  
00:06:09.437  real	0m2.915s
00:06:09.437  user	0m2.491s
00:06:09.437  sys	0m0.222s
00:06:09.437   06:16:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:09.437  ************************************
00:06:09.437  END TEST accel_xor
00:06:09.437  ************************************
00:06:09.437   06:16:26	-- common/autotest_common.sh@10 -- # set +x
00:06:09.437   06:16:26	-- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:09.437   06:16:26	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:09.437   06:16:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:09.437   06:16:26	-- common/autotest_common.sh@10 -- # set +x
00:06:09.437  ************************************
00:06:09.437  START TEST accel_xor
00:06:09.437  ************************************
00:06:09.437   06:16:26	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3
00:06:09.437   06:16:26	-- accel/accel.sh@16 -- # local accel_opc
00:06:09.437   06:16:26	-- accel/accel.sh@17 -- # local accel_module
00:06:09.437    06:16:26	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3
00:06:09.437    06:16:26	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:09.437     06:16:26	-- accel/accel.sh@12 -- # build_accel_config
00:06:09.437     06:16:26	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:09.437     06:16:26	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:09.437     06:16:26	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:09.437     06:16:26	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:09.437     06:16:26	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:09.437     06:16:26	-- accel/accel.sh@41 -- # local IFS=,
00:06:09.437     06:16:26	-- accel/accel.sh@42 -- # jq -r .
00:06:09.437  [2024-12-16 06:16:26.232211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:09.437  [2024-12-16 06:16:26.232294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58869 ]
00:06:09.437  [2024-12-16 06:16:26.363143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.696  [2024-12-16 06:16:26.431652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.073   06:16:27	-- accel/accel.sh@18 -- # out='
00:06:11.073  SPDK Configuration:
00:06:11.073  Core mask:      0x1
00:06:11.073  
00:06:11.073  Accel Perf Configuration:
00:06:11.073  Workload Type:  xor
00:06:11.073  Source buffers: 3
00:06:11.073  Transfer size:  4096 bytes
00:06:11.073  Vector count    1
00:06:11.073  Module:         software
00:06:11.073  Queue depth:    32
00:06:11.073  Allocate depth: 32
00:06:11.073  # threads/core: 1
00:06:11.073  Run time:       1 seconds
00:06:11.073  Verify:         Yes
00:06:11.073  
00:06:11.073  Running for 1 seconds...
00:06:11.073  
00:06:11.073  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:11.073  ------------------------------------------------------------------------------------
00:06:11.073  0,0                      280480/s       1095 MiB/s                0                0
00:06:11.073  ====================================================================================
00:06:11.073  Total                    280480/s       1095 MiB/s                0                0'
00:06:11.073   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.073    06:16:27	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:11.073   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.073    06:16:27	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:11.073     06:16:27	-- accel/accel.sh@12 -- # build_accel_config
00:06:11.073     06:16:27	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:11.073     06:16:27	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:11.073     06:16:27	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:11.073     06:16:27	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:11.073     06:16:27	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:11.073     06:16:27	-- accel/accel.sh@41 -- # local IFS=,
00:06:11.073     06:16:27	-- accel/accel.sh@42 -- # jq -r .
00:06:11.073  [2024-12-16 06:16:27.670667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:11.074  [2024-12-16 06:16:27.670760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58888 ]
00:06:11.074  [2024-12-16 06:16:27.806048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.074  [2024-12-16 06:16:27.892500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=0x1
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=xor
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@24 -- # accel_opc=xor
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=3
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=software
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@23 -- # accel_module=software
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=32
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=32
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=1
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=Yes
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:11.074   06:16:27	-- accel/accel.sh@21 -- # val=
00:06:11.074   06:16:27	-- accel/accel.sh@22 -- # case "$var" in
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # IFS=:
00:06:11.074   06:16:27	-- accel/accel.sh@20 -- # read -r var val
00:06:12.450   06:16:29	-- accel/accel.sh@21 -- # val=
00:06:12.450   06:16:29	-- accel/accel.sh@22 -- # case "$var" in
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # IFS=:
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # read -r var val
00:06:12.450   06:16:29	-- accel/accel.sh@21 -- # val=
00:06:12.450   06:16:29	-- accel/accel.sh@22 -- # case "$var" in
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # IFS=:
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # read -r var val
00:06:12.450   06:16:29	-- accel/accel.sh@21 -- # val=
00:06:12.450   06:16:29	-- accel/accel.sh@22 -- # case "$var" in
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # IFS=:
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # read -r var val
00:06:12.450   06:16:29	-- accel/accel.sh@21 -- # val=
00:06:12.450   06:16:29	-- accel/accel.sh@22 -- # case "$var" in
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # IFS=:
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # read -r var val
00:06:12.450   06:16:29	-- accel/accel.sh@21 -- # val=
00:06:12.450   06:16:29	-- accel/accel.sh@22 -- # case "$var" in
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # IFS=:
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # read -r var val
00:06:12.450   06:16:29	-- accel/accel.sh@21 -- # val=
00:06:12.450   06:16:29	-- accel/accel.sh@22 -- # case "$var" in
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # IFS=:
00:06:12.450   06:16:29	-- accel/accel.sh@20 -- # read -r var val
00:06:12.450   06:16:29	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:12.450   06:16:29	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:06:12.450   06:16:29	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:12.450  
00:06:12.450  real	0m2.901s
00:06:12.450  user	0m2.494s
00:06:12.450  sys	0m0.207s
00:06:12.450  ************************************
00:06:12.450  END TEST accel_xor
00:06:12.450   06:16:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:12.450   06:16:29	-- common/autotest_common.sh@10 -- # set +x
00:06:12.450  ************************************
00:06:12.450   06:16:29	-- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:12.450   06:16:29	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:12.450   06:16:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.450   06:16:29	-- common/autotest_common.sh@10 -- # set +x
00:06:12.450  ************************************
00:06:12.450  START TEST accel_dif_verify
00:06:12.450  ************************************
00:06:12.450   06:16:29	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify
00:06:12.450   06:16:29	-- accel/accel.sh@16 -- # local accel_opc
00:06:12.450   06:16:29	-- accel/accel.sh@17 -- # local accel_module
00:06:12.450    06:16:29	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:06:12.451    06:16:29	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:12.451     06:16:29	-- accel/accel.sh@12 -- # build_accel_config
00:06:12.451     06:16:29	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:12.451     06:16:29	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:12.451     06:16:29	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:12.451     06:16:29	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:12.451     06:16:29	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:12.451     06:16:29	-- accel/accel.sh@41 -- # local IFS=,
00:06:12.451     06:16:29	-- accel/accel.sh@42 -- # jq -r .
00:06:12.451  [2024-12-16 06:16:29.188550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:12.451  [2024-12-16 06:16:29.188786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58923 ]
00:06:12.451  [2024-12-16 06:16:29.323536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.451  [2024-12-16 06:16:29.393337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.827   06:16:30	-- accel/accel.sh@18 -- # out='
00:06:13.827  SPDK Configuration:
00:06:13.827  Core mask:      0x1
00:06:13.827  
00:06:13.827  Accel Perf Configuration:
00:06:13.827  Workload Type:  dif_verify
00:06:13.827  Vector size:    4096 bytes
00:06:13.827  Transfer size:  4096 bytes
00:06:13.827  Block size:     512 bytes
00:06:13.827  Metadata size:  8 bytes
00:06:13.827  Vector count    1
00:06:13.827  Module:         software
00:06:13.827  Queue depth:    32
00:06:13.827  Allocate depth: 32
00:06:13.827  # threads/core: 1
00:06:13.827  Run time:       1 seconds
00:06:13.827  Verify:         No
00:06:13.827  
00:06:13.827  Running for 1 seconds...
00:06:13.827  
00:06:13.827  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:13.827  ------------------------------------------------------------------------------------
00:06:13.827  0,0                      121760/s        475 MiB/s                0                0
00:06:13.827  ====================================================================================
00:06:13.827  Total                    121760/s        475 MiB/s                0                0'
00:06:13.827   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:13.827    06:16:30	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:13.827   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:13.827     06:16:30	-- accel/accel.sh@12 -- # build_accel_config
00:06:13.827    06:16:30	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:13.827     06:16:30	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:13.827     06:16:30	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:13.827     06:16:30	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:13.827     06:16:30	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:13.827     06:16:30	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:13.827     06:16:30	-- accel/accel.sh@41 -- # local IFS=,
00:06:13.827     06:16:30	-- accel/accel.sh@42 -- # jq -r .
00:06:13.827  [2024-12-16 06:16:30.639806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:13.827  [2024-12-16 06:16:30.640052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58942 ]
00:06:13.827  [2024-12-16 06:16:30.774381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:14.086  [2024-12-16 06:16:30.841595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=0x1
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=dif_verify
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@24 -- # accel_opc=dif_verify
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val='512 bytes'
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val='8 bytes'
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=software
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@23 -- # accel_module=software
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=32
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=32
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=1
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=No
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:14.086   06:16:30	-- accel/accel.sh@21 -- # val=
00:06:14.086   06:16:30	-- accel/accel.sh@22 -- # case "$var" in
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # IFS=:
00:06:14.086   06:16:30	-- accel/accel.sh@20 -- # read -r var val
00:06:15.467   06:16:32	-- accel/accel.sh@21 -- # val=
00:06:15.467   06:16:32	-- accel/accel.sh@22 -- # case "$var" in
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # IFS=:
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # read -r var val
00:06:15.467   06:16:32	-- accel/accel.sh@21 -- # val=
00:06:15.467   06:16:32	-- accel/accel.sh@22 -- # case "$var" in
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # IFS=:
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # read -r var val
00:06:15.467   06:16:32	-- accel/accel.sh@21 -- # val=
00:06:15.467   06:16:32	-- accel/accel.sh@22 -- # case "$var" in
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # IFS=:
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # read -r var val
00:06:15.467   06:16:32	-- accel/accel.sh@21 -- # val=
00:06:15.467   06:16:32	-- accel/accel.sh@22 -- # case "$var" in
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # IFS=:
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # read -r var val
00:06:15.467   06:16:32	-- accel/accel.sh@21 -- # val=
00:06:15.467   06:16:32	-- accel/accel.sh@22 -- # case "$var" in
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # IFS=:
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # read -r var val
00:06:15.467   06:16:32	-- accel/accel.sh@21 -- # val=
00:06:15.467  ************************************
00:06:15.467  END TEST accel_dif_verify
00:06:15.467  ************************************
00:06:15.467   06:16:32	-- accel/accel.sh@22 -- # case "$var" in
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # IFS=:
00:06:15.467   06:16:32	-- accel/accel.sh@20 -- # read -r var val
00:06:15.467   06:16:32	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:15.467   06:16:32	-- accel/accel.sh@28 -- # [[ -n dif_verify ]]
00:06:15.467   06:16:32	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:15.467  
00:06:15.467  real	0m2.899s
00:06:15.467  user	0m2.489s
00:06:15.467  sys	0m0.209s
00:06:15.467   06:16:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:15.467   06:16:32	-- common/autotest_common.sh@10 -- # set +x
00:06:15.467   06:16:32	-- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:15.467   06:16:32	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:15.467   06:16:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:15.467   06:16:32	-- common/autotest_common.sh@10 -- # set +x
00:06:15.467  ************************************
00:06:15.467  START TEST accel_dif_generate
00:06:15.467  ************************************
00:06:15.467   06:16:32	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate
00:06:15.467   06:16:32	-- accel/accel.sh@16 -- # local accel_opc
00:06:15.467   06:16:32	-- accel/accel.sh@17 -- # local accel_module
00:06:15.467    06:16:32	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate
00:06:15.467    06:16:32	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:15.467     06:16:32	-- accel/accel.sh@12 -- # build_accel_config
00:06:15.467     06:16:32	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:15.467     06:16:32	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:15.467     06:16:32	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:15.467     06:16:32	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:15.467     06:16:32	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:15.467     06:16:32	-- accel/accel.sh@41 -- # local IFS=,
00:06:15.467     06:16:32	-- accel/accel.sh@42 -- # jq -r .
00:06:15.467  [2024-12-16 06:16:32.144222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:15.467  [2024-12-16 06:16:32.144501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ]
00:06:15.467  [2024-12-16 06:16:32.280495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.467  [2024-12-16 06:16:32.346091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.843   06:16:33	-- accel/accel.sh@18 -- # out='
00:06:16.843  SPDK Configuration:
00:06:16.843  Core mask:      0x1
00:06:16.843  
00:06:16.843  Accel Perf Configuration:
00:06:16.843  Workload Type:  dif_generate
00:06:16.843  Vector size:    4096 bytes
00:06:16.843  Transfer size:  4096 bytes
00:06:16.843  Block size:     512 bytes
00:06:16.843  Metadata size:  8 bytes
00:06:16.843  Vector count    1
00:06:16.843  Module:         software
00:06:16.843  Queue depth:    32
00:06:16.843  Allocate depth: 32
00:06:16.843  # threads/core: 1
00:06:16.843  Run time:       1 seconds
00:06:16.843  Verify:         No
00:06:16.843  
00:06:16.843  Running for 1 seconds...
00:06:16.843  
00:06:16.843  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:16.843  ------------------------------------------------------------------------------------
00:06:16.843  0,0                      147008/s        574 MiB/s                0                0
00:06:16.843  ====================================================================================
00:06:16.843  Total                    147008/s        574 MiB/s                0                0'
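(For reference: the summary above is accel_perf exercising the software dif_generate path, roughly 147K 4096-byte operations per second with the 512-byte block and 8-byte metadata layout listed in the configuration dump; no block or metadata size flags appear in the traced command, so those values are the tool's defaults for this workload. A minimal standalone reproduction, assuming the same checkout path and omitting the harness-supplied "-c /dev/fd/62" config (the generated JSON config is effectively empty in this run, so the software module is selected either way), would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate

Here -t is the run time in seconds and -w selects the workload, matching the "Run time" and "Workload Type" fields above.)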
00:06:16.843   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:16.843    06:16:33	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:16.843   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:16.843     06:16:33	-- accel/accel.sh@12 -- # build_accel_config
00:06:16.843    06:16:33	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:16.843     06:16:33	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:16.843     06:16:33	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:16.843     06:16:33	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:16.843     06:16:33	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:16.843     06:16:33	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:16.843     06:16:33	-- accel/accel.sh@41 -- # local IFS=,
00:06:16.843     06:16:33	-- accel/accel.sh@42 -- # jq -r .
00:06:16.843  [2024-12-16 06:16:33.596499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:16.843  [2024-12-16 06:16:33.596621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ]
00:06:16.843  [2024-12-16 06:16:33.731874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.843  [2024-12-16 06:16:33.809821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=0x1
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=dif_generate
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@24 -- # accel_opc=dif_generate
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val='512 bytes'
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val='8 bytes'
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=software
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@23 -- # accel_module=software
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=32
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=32
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=1
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=No
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:17.102   06:16:33	-- accel/accel.sh@21 -- # val=
00:06:17.102   06:16:33	-- accel/accel.sh@22 -- # case "$var" in
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # IFS=:
00:06:17.102   06:16:33	-- accel/accel.sh@20 -- # read -r var val
00:06:18.479   06:16:35	-- accel/accel.sh@21 -- # val=
00:06:18.479   06:16:35	-- accel/accel.sh@22 -- # case "$var" in
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # IFS=:
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # read -r var val
00:06:18.479   06:16:35	-- accel/accel.sh@21 -- # val=
00:06:18.479   06:16:35	-- accel/accel.sh@22 -- # case "$var" in
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # IFS=:
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # read -r var val
00:06:18.479   06:16:35	-- accel/accel.sh@21 -- # val=
00:06:18.479   06:16:35	-- accel/accel.sh@22 -- # case "$var" in
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # IFS=:
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # read -r var val
00:06:18.479   06:16:35	-- accel/accel.sh@21 -- # val=
00:06:18.479  ************************************
00:06:18.479  END TEST accel_dif_generate
00:06:18.479  ************************************
00:06:18.479   06:16:35	-- accel/accel.sh@22 -- # case "$var" in
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # IFS=:
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # read -r var val
00:06:18.479   06:16:35	-- accel/accel.sh@21 -- # val=
00:06:18.479   06:16:35	-- accel/accel.sh@22 -- # case "$var" in
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # IFS=:
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # read -r var val
00:06:18.479   06:16:35	-- accel/accel.sh@21 -- # val=
00:06:18.479   06:16:35	-- accel/accel.sh@22 -- # case "$var" in
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # IFS=:
00:06:18.479   06:16:35	-- accel/accel.sh@20 -- # read -r var val
00:06:18.479   06:16:35	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:18.479   06:16:35	-- accel/accel.sh@28 -- # [[ -n dif_generate ]]
00:06:18.479   06:16:35	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:18.479  
00:06:18.479  real	0m2.922s
00:06:18.479  user	0m2.507s
00:06:18.479  sys	0m0.216s
00:06:18.479   06:16:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:18.479   06:16:35	-- common/autotest_common.sh@10 -- # set +x
00:06:18.479   06:16:35	-- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:18.479   06:16:35	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:18.479   06:16:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:18.479   06:16:35	-- common/autotest_common.sh@10 -- # set +x
00:06:18.479  ************************************
00:06:18.479  START TEST accel_dif_generate_copy
00:06:18.479  ************************************
00:06:18.479   06:16:35	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy
00:06:18.479   06:16:35	-- accel/accel.sh@16 -- # local accel_opc
00:06:18.479   06:16:35	-- accel/accel.sh@17 -- # local accel_module
00:06:18.479    06:16:35	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy
00:06:18.479    06:16:35	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:18.480     06:16:35	-- accel/accel.sh@12 -- # build_accel_config
00:06:18.480     06:16:35	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:18.480     06:16:35	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:18.480     06:16:35	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:18.480     06:16:35	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:18.480     06:16:35	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:18.480     06:16:35	-- accel/accel.sh@41 -- # local IFS=,
00:06:18.480     06:16:35	-- accel/accel.sh@42 -- # jq -r .
00:06:18.480  [2024-12-16 06:16:35.115668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:18.480  [2024-12-16 06:16:35.115754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ]
00:06:18.480  [2024-12-16 06:16:35.243027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.480  [2024-12-16 06:16:35.310478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.856   06:16:36	-- accel/accel.sh@18 -- # out='
00:06:19.856  SPDK Configuration:
00:06:19.856  Core mask:      0x1
00:06:19.857  
00:06:19.857  Accel Perf Configuration:
00:06:19.857  Workload Type:  dif_generate_copy
00:06:19.857  Vector size:    4096 bytes
00:06:19.857  Transfer size:  4096 bytes
00:06:19.857  Vector count    1
00:06:19.857  Module:         software
00:06:19.857  Queue depth:    32
00:06:19.857  Allocate depth: 32
00:06:19.857  # threads/core: 1
00:06:19.857  Run time:       1 seconds
00:06:19.857  Verify:         No
00:06:19.857  
00:06:19.857  Running for 1 seconds...
00:06:19.857  
00:06:19.857  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:19.857  ------------------------------------------------------------------------------------
00:06:19.857  0,0                      115072/s        456 MiB/s                0                0
00:06:19.857  ====================================================================================
00:06:19.857  Total                    115072/s        449 MiB/s                0                0'
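(For reference: same setup as the dif_generate run above, but dif_generate_copy also copies the 4096-byte payload into a destination buffer, which is consistent with the drop from roughly 147K to 115K transfers/s on the same software module. The traced invocation differs only in the workload name; a standalone form, with the harness config omitted as before, is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
)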
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857    06:16:36	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:19.857     06:16:36	-- accel/accel.sh@12 -- # build_accel_config
00:06:19.857    06:16:36	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:19.857     06:16:36	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:19.857     06:16:36	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:19.857     06:16:36	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:19.857     06:16:36	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:19.857     06:16:36	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:19.857     06:16:36	-- accel/accel.sh@41 -- # local IFS=,
00:06:19.857     06:16:36	-- accel/accel.sh@42 -- # jq -r .
00:06:19.857  [2024-12-16 06:16:36.544243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:19.857  [2024-12-16 06:16:36.544712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59045 ]
00:06:19.857  [2024-12-16 06:16:36.676748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.857  [2024-12-16 06:16:36.739686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=0x1
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=dif_generate_copy
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@24 -- # accel_opc=dif_generate_copy
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=software
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@23 -- # accel_module=software
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=32
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=32
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=1
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=No
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:19.857   06:16:36	-- accel/accel.sh@21 -- # val=
00:06:19.857   06:16:36	-- accel/accel.sh@22 -- # case "$var" in
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # IFS=:
00:06:19.857   06:16:36	-- accel/accel.sh@20 -- # read -r var val
00:06:21.233   06:16:37	-- accel/accel.sh@21 -- # val=
00:06:21.233   06:16:37	-- accel/accel.sh@22 -- # case "$var" in
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # IFS=:
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # read -r var val
00:06:21.233   06:16:37	-- accel/accel.sh@21 -- # val=
00:06:21.233   06:16:37	-- accel/accel.sh@22 -- # case "$var" in
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # IFS=:
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # read -r var val
00:06:21.233   06:16:37	-- accel/accel.sh@21 -- # val=
00:06:21.233   06:16:37	-- accel/accel.sh@22 -- # case "$var" in
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # IFS=:
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # read -r var val
00:06:21.233   06:16:37	-- accel/accel.sh@21 -- # val=
00:06:21.233   06:16:37	-- accel/accel.sh@22 -- # case "$var" in
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # IFS=:
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # read -r var val
00:06:21.233  ************************************
00:06:21.233  END TEST accel_dif_generate_copy
00:06:21.233  ************************************
00:06:21.233   06:16:37	-- accel/accel.sh@21 -- # val=
00:06:21.233   06:16:37	-- accel/accel.sh@22 -- # case "$var" in
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # IFS=:
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # read -r var val
00:06:21.233   06:16:37	-- accel/accel.sh@21 -- # val=
00:06:21.233   06:16:37	-- accel/accel.sh@22 -- # case "$var" in
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # IFS=:
00:06:21.233   06:16:37	-- accel/accel.sh@20 -- # read -r var val
00:06:21.233   06:16:37	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:21.233   06:16:37	-- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]]
00:06:21.233   06:16:37	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:21.233  
00:06:21.233  real	0m2.872s
00:06:21.233  user	0m2.471s
00:06:21.233  sys	0m0.196s
00:06:21.233   06:16:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:21.233   06:16:37	-- common/autotest_common.sh@10 -- # set +x
00:06:21.233   06:16:38	-- accel/accel.sh@107 -- # [[ y == y ]]
00:06:21.233   06:16:38	-- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:21.233   06:16:38	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:06:21.233   06:16:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:21.233   06:16:38	-- common/autotest_common.sh@10 -- # set +x
00:06:21.233  ************************************
00:06:21.233  START TEST accel_comp
00:06:21.233  ************************************
00:06:21.233   06:16:38	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:21.233   06:16:38	-- accel/accel.sh@16 -- # local accel_opc
00:06:21.233   06:16:38	-- accel/accel.sh@17 -- # local accel_module
00:06:21.233    06:16:38	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:21.233    06:16:38	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:21.233     06:16:38	-- accel/accel.sh@12 -- # build_accel_config
00:06:21.233     06:16:38	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:21.233     06:16:38	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:21.233     06:16:38	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:21.233     06:16:38	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:21.233     06:16:38	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:21.233     06:16:38	-- accel/accel.sh@41 -- # local IFS=,
00:06:21.233     06:16:38	-- accel/accel.sh@42 -- # jq -r .
00:06:21.233  [2024-12-16 06:16:38.044102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:21.233  [2024-12-16 06:16:38.044192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59079 ]
00:06:21.233  [2024-12-16 06:16:38.178521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.492  [2024-12-16 06:16:38.243679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.868   06:16:39	-- accel/accel.sh@18 -- # out='Preparing input file...
00:06:22.868  
00:06:22.868  SPDK Configuration:
00:06:22.868  Core mask:      0x1
00:06:22.868  
00:06:22.868  Accel Perf Configuration:
00:06:22.868  Workload Type:  compress
00:06:22.868  Transfer size:  4096 bytes
00:06:22.868  Vector count    1
00:06:22.868  Module:         software
00:06:22.868  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:22.868  Queue depth:    32
00:06:22.868  Allocate depth: 32
00:06:22.868  # threads/core: 1
00:06:22.868  Run time:       1 seconds
00:06:22.868  Verify:         No
00:06:22.868  
00:06:22.868  Running for 1 seconds...
00:06:22.868  
00:06:22.868  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:22.868  ------------------------------------------------------------------------------------
00:06:22.868  0,0                       59136/s        246 MiB/s                0                0
00:06:22.868  ====================================================================================
00:06:22.868  Total                     59136/s        231 MiB/s                0                0'
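(For reference: the compress runs take their input from the bib test file passed with -l and run with Verify set to No; the "Preparing input file..." line above appears to be accel_perf loading and splitting that file into 4096-byte transfers before the timed second starts. Standalone form of the traced command, assuming the same checkout:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
)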
00:06:22.868   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.868   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.868    06:16:39	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:22.868    06:16:39	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:22.868     06:16:39	-- accel/accel.sh@12 -- # build_accel_config
00:06:22.868     06:16:39	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:22.868     06:16:39	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:22.868     06:16:39	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:22.868     06:16:39	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:22.868     06:16:39	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:22.868     06:16:39	-- accel/accel.sh@41 -- # local IFS=,
00:06:22.868     06:16:39	-- accel/accel.sh@42 -- # jq -r .
00:06:22.868  [2024-12-16 06:16:39.480387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:22.868  [2024-12-16 06:16:39.480661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59099 ]
00:06:22.868  [2024-12-16 06:16:39.614574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.868  [2024-12-16 06:16:39.676405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.868   06:16:39	-- accel/accel.sh@21 -- # val=
00:06:22.868   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.868   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.868   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.868   06:16:39	-- accel/accel.sh@21 -- # val=
00:06:22.868   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.868   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.868   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.868   06:16:39	-- accel/accel.sh@21 -- # val=
00:06:22.868   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.868   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.868   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.868   06:16:39	-- accel/accel.sh@21 -- # val=0x1
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=compress
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@24 -- # accel_opc=compress
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=software
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@23 -- # accel_module=software
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=32
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=32
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=1
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=No
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:22.869   06:16:39	-- accel/accel.sh@21 -- # val=
00:06:22.869   06:16:39	-- accel/accel.sh@22 -- # case "$var" in
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # IFS=:
00:06:22.869   06:16:39	-- accel/accel.sh@20 -- # read -r var val
00:06:24.246   06:16:40	-- accel/accel.sh@21 -- # val=
00:06:24.246   06:16:40	-- accel/accel.sh@22 -- # case "$var" in
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # IFS=:
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # read -r var val
00:06:24.246   06:16:40	-- accel/accel.sh@21 -- # val=
00:06:24.246   06:16:40	-- accel/accel.sh@22 -- # case "$var" in
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # IFS=:
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # read -r var val
00:06:24.246   06:16:40	-- accel/accel.sh@21 -- # val=
00:06:24.246   06:16:40	-- accel/accel.sh@22 -- # case "$var" in
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # IFS=:
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # read -r var val
00:06:24.246   06:16:40	-- accel/accel.sh@21 -- # val=
00:06:24.246   06:16:40	-- accel/accel.sh@22 -- # case "$var" in
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # IFS=:
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # read -r var val
00:06:24.246   06:16:40	-- accel/accel.sh@21 -- # val=
00:06:24.246   06:16:40	-- accel/accel.sh@22 -- # case "$var" in
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # IFS=:
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # read -r var val
00:06:24.246   06:16:40	-- accel/accel.sh@21 -- # val=
00:06:24.246   06:16:40	-- accel/accel.sh@22 -- # case "$var" in
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # IFS=:
00:06:24.246   06:16:40	-- accel/accel.sh@20 -- # read -r var val
00:06:24.246   06:16:40	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:24.246   06:16:40	-- accel/accel.sh@28 -- # [[ -n compress ]]
00:06:24.246   06:16:40	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:24.246  
00:06:24.246  real	0m2.871s
00:06:24.246  user	0m2.470s
00:06:24.246  sys	0m0.201s
00:06:24.246   06:16:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:24.246  ************************************
00:06:24.246  END TEST accel_comp
00:06:24.246  ************************************
00:06:24.246   06:16:40	-- common/autotest_common.sh@10 -- # set +x
00:06:24.246   06:16:40	-- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:06:24.246   06:16:40	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:24.246   06:16:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:24.246   06:16:40	-- common/autotest_common.sh@10 -- # set +x
00:06:24.246  ************************************
00:06:24.246  START TEST accel_decomp
00:06:24.246  ************************************
00:06:24.246   06:16:40	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:06:24.246   06:16:40	-- accel/accel.sh@16 -- # local accel_opc
00:06:24.246   06:16:40	-- accel/accel.sh@17 -- # local accel_module
00:06:24.246    06:16:40	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:06:24.246    06:16:40	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:06:24.246     06:16:40	-- accel/accel.sh@12 -- # build_accel_config
00:06:24.246     06:16:40	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:24.246     06:16:40	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:24.246     06:16:40	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:24.246     06:16:40	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:24.246     06:16:40	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:24.246     06:16:40	-- accel/accel.sh@41 -- # local IFS=,
00:06:24.246     06:16:40	-- accel/accel.sh@42 -- # jq -r .
00:06:24.246  [2024-12-16 06:16:40.963451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:24.246  [2024-12-16 06:16:40.963635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59128 ]
00:06:24.246  [2024-12-16 06:16:41.091681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.246  [2024-12-16 06:16:41.166687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.624   06:16:42	-- accel/accel.sh@18 -- # out='Preparing input file...
00:06:25.624  
00:06:25.624  SPDK Configuration:
00:06:25.624  Core mask:      0x1
00:06:25.624  
00:06:25.624  Accel Perf Configuration:
00:06:25.624  Workload Type:  decompress
00:06:25.624  Transfer size:  4096 bytes
00:06:25.624  Vector count    1
00:06:25.624  Module:         software
00:06:25.624  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:25.624  Queue depth:    32
00:06:25.624  Allocate depth: 32
00:06:25.624  # threads/core: 1
00:06:25.624  Run time:       1 seconds
00:06:25.624  Verify:         Yes
00:06:25.624  
00:06:25.624  Running for 1 seconds...
00:06:25.624  
00:06:25.624  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:25.624  ------------------------------------------------------------------------------------
00:06:25.624  0,0                       83872/s        154 MiB/s                0                0
00:06:25.624  ====================================================================================
00:06:25.624  Total                     83872/s        327 MiB/s                0                0'
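(For reference: the decompress run reuses the same bib input but adds -y, which lines up with the Verify field flipping to Yes here after reading No in the compress run above. Standalone form:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
)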
00:06:25.624   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.624   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.624    06:16:42	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:06:25.624    06:16:42	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:06:25.624     06:16:42	-- accel/accel.sh@12 -- # build_accel_config
00:06:25.624     06:16:42	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:25.624     06:16:42	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:25.624     06:16:42	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:25.624     06:16:42	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:25.624     06:16:42	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:25.624     06:16:42	-- accel/accel.sh@41 -- # local IFS=,
00:06:25.624     06:16:42	-- accel/accel.sh@42 -- # jq -r .
00:06:25.624  [2024-12-16 06:16:42.401235] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:25.624  [2024-12-16 06:16:42.401334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59153 ]
00:06:25.624  [2024-12-16 06:16:42.533165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.624  [2024-12-16 06:16:42.594986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=0x1
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=decompress
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@24 -- # accel_opc=decompress
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=software
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@23 -- # accel_module=software
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=32
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=32
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=1
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=Yes
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:25.883   06:16:42	-- accel/accel.sh@21 -- # val=
00:06:25.883   06:16:42	-- accel/accel.sh@22 -- # case "$var" in
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # IFS=:
00:06:25.883   06:16:42	-- accel/accel.sh@20 -- # read -r var val
00:06:27.259   06:16:43	-- accel/accel.sh@21 -- # val=
00:06:27.259   06:16:43	-- accel/accel.sh@22 -- # case "$var" in
00:06:27.259   06:16:43	-- accel/accel.sh@20 -- # IFS=:
00:06:27.259   06:16:43	-- accel/accel.sh@20 -- # read -r var val
00:06:27.259   06:16:43	-- accel/accel.sh@21 -- # val=
00:06:27.259   06:16:43	-- accel/accel.sh@22 -- # case "$var" in
00:06:27.259   06:16:43	-- accel/accel.sh@20 -- # IFS=:
00:06:27.259   06:16:43	-- accel/accel.sh@20 -- # read -r var val
00:06:27.259   06:16:43	-- accel/accel.sh@21 -- # val=
00:06:27.259   06:16:43	-- accel/accel.sh@22 -- # case "$var" in
00:06:27.259   06:16:43	-- accel/accel.sh@20 -- # IFS=:
00:06:27.259   06:16:43	-- accel/accel.sh@20 -- # read -r var val
00:06:27.259   06:16:43	-- accel/accel.sh@21 -- # val=
00:06:27.260   06:16:43	-- accel/accel.sh@22 -- # case "$var" in
00:06:27.260   06:16:43	-- accel/accel.sh@20 -- # IFS=:
00:06:27.260   06:16:43	-- accel/accel.sh@20 -- # read -r var val
00:06:27.260   06:16:43	-- accel/accel.sh@21 -- # val=
00:06:27.260   06:16:43	-- accel/accel.sh@22 -- # case "$var" in
00:06:27.260   06:16:43	-- accel/accel.sh@20 -- # IFS=:
00:06:27.260   06:16:43	-- accel/accel.sh@20 -- # read -r var val
00:06:27.260   06:16:43	-- accel/accel.sh@21 -- # val=
00:06:27.260   06:16:43	-- accel/accel.sh@22 -- # case "$var" in
00:06:27.260   06:16:43	-- accel/accel.sh@20 -- # IFS=:
00:06:27.260   06:16:43	-- accel/accel.sh@20 -- # read -r var val
00:06:27.260   06:16:43	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:27.260   06:16:43	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:06:27.260   06:16:43	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:27.260  
00:06:27.260  real	0m2.868s
00:06:27.260  user	0m2.456s
00:06:27.260  sys	0m0.213s
00:06:27.260   06:16:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:27.260  ************************************
00:06:27.260  END TEST accel_decomp
00:06:27.260  ************************************
00:06:27.260   06:16:43	-- common/autotest_common.sh@10 -- # set +x
00:06:27.260   06:16:43	-- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:06:27.260   06:16:43	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:06:27.260   06:16:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:27.260   06:16:43	-- common/autotest_common.sh@10 -- # set +x
00:06:27.260  ************************************
00:06:27.260  START TEST accel_decmop_full
00:06:27.260  ************************************
00:06:27.260   06:16:43	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:06:27.260   06:16:43	-- accel/accel.sh@16 -- # local accel_opc
00:06:27.260   06:16:43	-- accel/accel.sh@17 -- # local accel_module
00:06:27.260    06:16:43	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:06:27.260    06:16:43	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:06:27.260     06:16:43	-- accel/accel.sh@12 -- # build_accel_config
00:06:27.260     06:16:43	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:27.260     06:16:43	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:27.260     06:16:43	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:27.260     06:16:43	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:27.260     06:16:43	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:27.260     06:16:43	-- accel/accel.sh@41 -- # local IFS=,
00:06:27.260     06:16:43	-- accel/accel.sh@42 -- # jq -r .
00:06:27.260  [2024-12-16 06:16:43.881107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:27.260  [2024-12-16 06:16:43.881188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ]
00:06:27.260  [2024-12-16 06:16:44.010881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.260  [2024-12-16 06:16:44.073569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.636   06:16:45	-- accel/accel.sh@18 -- # out='Preparing input file...
00:06:28.636  
00:06:28.636  SPDK Configuration:
00:06:28.636  Core mask:      0x1
00:06:28.636  
00:06:28.636  Accel Perf Configuration:
00:06:28.636  Workload Type:  decompress
00:06:28.636  Transfer size:  111250 bytes
00:06:28.636  Vector count    1
00:06:28.636  Module:         software
00:06:28.636  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:28.636  Queue depth:    32
00:06:28.636  Allocate depth: 32
00:06:28.636  # threads/core: 1
00:06:28.636  Run time:       1 seconds
00:06:28.636  Verify:         Yes
00:06:28.636  
00:06:28.636  Running for 1 seconds...
00:06:28.636  
00:06:28.636  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:28.636  ------------------------------------------------------------------------------------
00:06:28.636  0,0                        5600/s        231 MiB/s                0                0
00:06:28.636  ====================================================================================
00:06:28.636  Total                      5600/s        594 MiB/s                0                0'
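(For reference: this run repeats the decompress workload with -o 0 appended; the visible effect in the dump is that each operation now moves a full 111250-byte chunk instead of 4096 bytes, so transfers/s drop from roughly 84K to 5.6K while the reported total bandwidth rises from 327 to 594 MiB/s. Standalone form:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
)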
00:06:28.636    06:16:45	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636    06:16:45	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:06:28.636     06:16:45	-- accel/accel.sh@12 -- # build_accel_config
00:06:28.636     06:16:45	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:28.636     06:16:45	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:28.636     06:16:45	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:28.636     06:16:45	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:28.636     06:16:45	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:28.636     06:16:45	-- accel/accel.sh@41 -- # local IFS=,
00:06:28.636     06:16:45	-- accel/accel.sh@42 -- # jq -r .
00:06:28.636  [2024-12-16 06:16:45.319890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:28.636  [2024-12-16 06:16:45.320128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ]
00:06:28.636  [2024-12-16 06:16:45.442537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.636  [2024-12-16 06:16:45.505049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=0x1
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=decompress
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@24 -- # accel_opc=decompress
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val='111250 bytes'
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=software
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@23 -- # accel_module=software
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=32
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=32
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=1
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=Yes
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:28.636   06:16:45	-- accel/accel.sh@21 -- # val=
00:06:28.636   06:16:45	-- accel/accel.sh@22 -- # case "$var" in
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # IFS=:
00:06:28.636   06:16:45	-- accel/accel.sh@20 -- # read -r var val
00:06:30.014   06:16:46	-- accel/accel.sh@21 -- # val=
00:06:30.014   06:16:46	-- accel/accel.sh@22 -- # case "$var" in
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # IFS=:
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # read -r var val
00:06:30.014   06:16:46	-- accel/accel.sh@21 -- # val=
00:06:30.014   06:16:46	-- accel/accel.sh@22 -- # case "$var" in
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # IFS=:
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # read -r var val
00:06:30.014   06:16:46	-- accel/accel.sh@21 -- # val=
00:06:30.014   06:16:46	-- accel/accel.sh@22 -- # case "$var" in
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # IFS=:
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # read -r var val
00:06:30.014   06:16:46	-- accel/accel.sh@21 -- # val=
00:06:30.014   06:16:46	-- accel/accel.sh@22 -- # case "$var" in
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # IFS=:
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # read -r var val
00:06:30.014   06:16:46	-- accel/accel.sh@21 -- # val=
00:06:30.014   06:16:46	-- accel/accel.sh@22 -- # case "$var" in
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # IFS=:
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # read -r var val
00:06:30.014   06:16:46	-- accel/accel.sh@21 -- # val=
00:06:30.014   06:16:46	-- accel/accel.sh@22 -- # case "$var" in
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # IFS=:
00:06:30.014   06:16:46	-- accel/accel.sh@20 -- # read -r var val
00:06:30.014   06:16:46	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:30.014   06:16:46	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:06:30.014  ************************************
00:06:30.014  END TEST accel_decmop_full
00:06:30.014  ************************************
00:06:30.014   06:16:46	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:30.014  
00:06:30.014  real	0m2.866s
00:06:30.014  user	0m2.485s
00:06:30.014  sys	0m0.181s
00:06:30.014   06:16:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:30.014   06:16:46	-- common/autotest_common.sh@10 -- # set +x
00:06:30.014   06:16:46	-- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:06:30.014   06:16:46	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:06:30.014   06:16:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:30.014   06:16:46	-- common/autotest_common.sh@10 -- # set +x
00:06:30.014  ************************************
00:06:30.014  START TEST accel_decomp_mcore
00:06:30.014  ************************************
00:06:30.014   06:16:46	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:06:30.014   06:16:46	-- accel/accel.sh@16 -- # local accel_opc
00:06:30.014   06:16:46	-- accel/accel.sh@17 -- # local accel_module
00:06:30.014    06:16:46	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:06:30.014    06:16:46	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:06:30.014     06:16:46	-- accel/accel.sh@12 -- # build_accel_config
00:06:30.014     06:16:46	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:30.014     06:16:46	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:30.014     06:16:46	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:30.014     06:16:46	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:30.014     06:16:46	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:30.014     06:16:46	-- accel/accel.sh@41 -- # local IFS=,
00:06:30.014     06:16:46	-- accel/accel.sh@42 -- # jq -r .
00:06:30.014  [2024-12-16 06:16:46.802663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:30.014  [2024-12-16 06:16:46.802896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59236 ]
00:06:30.014  [2024-12-16 06:16:46.939273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:30.273  [2024-12-16 06:16:47.007185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:30.273  [2024-12-16 06:16:47.007325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:30.273  [2024-12-16 06:16:47.007480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:30.273  [2024-12-16 06:16:47.007482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.648   06:16:48	-- accel/accel.sh@18 -- # out='Preparing input file...
00:06:31.648  
00:06:31.648  SPDK Configuration:
00:06:31.648  Core mask:      0xf
00:06:31.648  
00:06:31.649  Accel Perf Configuration:
00:06:31.649  Workload Type:  decompress
00:06:31.649  Transfer size:  4096 bytes
00:06:31.649  Vector count    1
00:06:31.649  Module:         software
00:06:31.649  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:31.649  Queue depth:    32
00:06:31.649  Allocate depth: 32
00:06:31.649  # threads/core: 1
00:06:31.649  Run time:       1 seconds
00:06:31.649  Verify:         Yes
00:06:31.649  
00:06:31.649  Running for 1 seconds...
00:06:31.649  
00:06:31.649  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:31.649  ------------------------------------------------------------------------------------
00:06:31.649  0,0                       66560/s        122 MiB/s                0                0
00:06:31.649  3,0                       63392/s        116 MiB/s                0                0
00:06:31.649  2,0                       63808/s        117 MiB/s                0                0
00:06:31.649  1,0                       64992/s        119 MiB/s                0                0
00:06:31.649  ====================================================================================
00:06:31.649  Total                    258752/s       1010 MiB/s                0                0'
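The table above comes from the accel_perf invocation traced at accel.sh@12. Leaving out the JSON-config descriptor, a minimal sketch of repeating the same software decompress measurement by hand, with paths exactly as they appear in this log, is:

    # 1-second software decompress of the bib test file: 4 KiB transfers,
    # queue depth 32 (the default reported in the configuration block),
    # -y for the "Verify: Yes" mode, -m 0xf for cores 0-3, which is why the
    # table shows four Core,Thread rows.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf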
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649    06:16:48	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:06:31.649    06:16:48	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:06:31.649     06:16:48	-- accel/accel.sh@12 -- # build_accel_config
00:06:31.649     06:16:48	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:31.649     06:16:48	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:31.649     06:16:48	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:31.649     06:16:48	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:31.649     06:16:48	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:31.649     06:16:48	-- accel/accel.sh@41 -- # local IFS=,
00:06:31.649     06:16:48	-- accel/accel.sh@42 -- # jq -r .
00:06:31.649  [2024-12-16 06:16:48.256991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:31.649  [2024-12-16 06:16:48.257256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ]
00:06:31.649  [2024-12-16 06:16:48.395288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:31.649  [2024-12-16 06:16:48.474589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.649  [2024-12-16 06:16:48.474690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:31.649  [2024-12-16 06:16:48.474820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:31.649  [2024-12-16 06:16:48.474931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=0xf
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=decompress
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@24 -- # accel_opc=decompress
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=software
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@23 -- # accel_module=software
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=32
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=32
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=1
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=Yes
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:31.649   06:16:48	-- accel/accel.sh@21 -- # val=
00:06:31.649   06:16:48	-- accel/accel.sh@22 -- # case "$var" in
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # IFS=:
00:06:31.649   06:16:48	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@21 -- # val=
00:06:33.058  ************************************
00:06:33.058  END TEST accel_decomp_mcore
00:06:33.058  ************************************
00:06:33.058   06:16:49	-- accel/accel.sh@22 -- # case "$var" in
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # IFS=:
00:06:33.058   06:16:49	-- accel/accel.sh@20 -- # read -r var val
00:06:33.058   06:16:49	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:33.058   06:16:49	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:06:33.058   06:16:49	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:33.058  
00:06:33.058  real	0m2.932s
00:06:33.058  user	0m9.257s
00:06:33.058  sys	0m0.236s
00:06:33.058   06:16:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:33.058   06:16:49	-- common/autotest_common.sh@10 -- # set +x
00:06:33.059   06:16:49	-- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:33.059   06:16:49	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:06:33.059   06:16:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:33.059   06:16:49	-- common/autotest_common.sh@10 -- # set +x
00:06:33.059  ************************************
00:06:33.059  START TEST accel_decomp_full_mcore
00:06:33.059  ************************************
00:06:33.059   06:16:49	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:33.059   06:16:49	-- accel/accel.sh@16 -- # local accel_opc
00:06:33.059   06:16:49	-- accel/accel.sh@17 -- # local accel_module
00:06:33.059    06:16:49	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:33.059    06:16:49	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:33.059     06:16:49	-- accel/accel.sh@12 -- # build_accel_config
00:06:33.059     06:16:49	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:33.059     06:16:49	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:33.059     06:16:49	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:33.059     06:16:49	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:33.059     06:16:49	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:33.059     06:16:49	-- accel/accel.sh@41 -- # local IFS=,
00:06:33.059     06:16:49	-- accel/accel.sh@42 -- # jq -r .
00:06:33.059  [2024-12-16 06:16:49.785032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:33.059  [2024-12-16 06:16:49.785115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59298 ]
00:06:33.059  [2024-12-16 06:16:49.916610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:33.059  [2024-12-16 06:16:49.983799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:33.059  [2024-12-16 06:16:49.983902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:33.059  [2024-12-16 06:16:49.983993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:33.059  [2024-12-16 06:16:49.983996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.436   06:16:51	-- accel/accel.sh@18 -- # out='Preparing input file...
00:06:34.436  
00:06:34.436  SPDK Configuration:
00:06:34.436  Core mask:      0xf
00:06:34.436  
00:06:34.436  Accel Perf Configuration:
00:06:34.436  Workload Type:  decompress
00:06:34.436  Transfer size:  111250 bytes
00:06:34.436  Vector count    1
00:06:34.436  Module:         software
00:06:34.436  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:34.436  Queue depth:    32
00:06:34.436  Allocate depth: 32
00:06:34.436  # threads/core: 1
00:06:34.436  Run time:       1 seconds
00:06:34.436  Verify:         Yes
00:06:34.436  
00:06:34.436  Running for 1 seconds...
00:06:34.436  
00:06:34.436  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:34.436  ------------------------------------------------------------------------------------
00:06:34.436  0,0                        5024/s        207 MiB/s                0                0
00:06:34.436  3,0                        5024/s        207 MiB/s                0                0
00:06:34.436  2,0                        5056/s        208 MiB/s                0                0
00:06:34.436  1,0                        5024/s        207 MiB/s                0                0
00:06:34.436  ====================================================================================
00:06:34.436  Total                     20128/s       2135 MiB/s                0                0'
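A quick consistency check on the two mcore tables: multiplying the Total transfer rate by the transfer size reproduces the reported bandwidth, so the numbers are internally coherent.

    # Numbers copied from the two Total rows above (4096-byte and 111250-byte runs).
    echo "$((258752 * 4096 / 1048576)) MiB/s and $((20128 * 111250 / 1048576)) MiB/s"
    # prints: 1010 MiB/s and 2135 MiB/s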
00:06:34.436   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.436   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.436    06:16:51	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:34.436    06:16:51	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:34.436     06:16:51	-- accel/accel.sh@12 -- # build_accel_config
00:06:34.436     06:16:51	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:34.436     06:16:51	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:34.436     06:16:51	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:34.436     06:16:51	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:34.436     06:16:51	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:34.436     06:16:51	-- accel/accel.sh@41 -- # local IFS=,
00:06:34.436     06:16:51	-- accel/accel.sh@42 -- # jq -r .
00:06:34.436  [2024-12-16 06:16:51.243952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:34.436  [2024-12-16 06:16:51.244932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ]
00:06:34.436  [2024-12-16 06:16:51.383919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:34.695  [2024-12-16 06:16:51.450836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.695  [2024-12-16 06:16:51.450943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:34.695  [2024-12-16 06:16:51.451088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:34.695  [2024-12-16 06:16:51.451094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=0xf
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=decompress
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@24 -- # accel_opc=decompress
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val='111250 bytes'
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=software
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@23 -- # accel_module=software
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=32
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=32
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val=1
00:06:34.695   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.695   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.695   06:16:51	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:34.696   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.696   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.696   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.696   06:16:51	-- accel/accel.sh@21 -- # val=Yes
00:06:34.696   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.696   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.696   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.696   06:16:51	-- accel/accel.sh@21 -- # val=
00:06:34.696   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.696   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.696   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:34.696   06:16:51	-- accel/accel.sh@21 -- # val=
00:06:34.696   06:16:51	-- accel/accel.sh@22 -- # case "$var" in
00:06:34.696   06:16:51	-- accel/accel.sh@20 -- # IFS=:
00:06:34.696   06:16:51	-- accel/accel.sh@20 -- # read -r var val
00:06:36.074   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.074   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.074   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.074   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.074   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.074   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.074   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.074   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.074   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.074   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.074   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.074   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.074   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.074   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.074   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.074   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.074  ************************************
00:06:36.075  END TEST accel_decomp_full_mcore
00:06:36.075  ************************************
00:06:36.075   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.075   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.075   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.075   06:16:52	-- accel/accel.sh@21 -- # val=
00:06:36.075   06:16:52	-- accel/accel.sh@22 -- # case "$var" in
00:06:36.075   06:16:52	-- accel/accel.sh@20 -- # IFS=:
00:06:36.075   06:16:52	-- accel/accel.sh@20 -- # read -r var val
00:06:36.075   06:16:52	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:36.075   06:16:52	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:06:36.075   06:16:52	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:36.075  
00:06:36.075  real	0m2.939s
00:06:36.075  user	0m9.305s
00:06:36.075  sys	0m0.268s
00:06:36.075   06:16:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:36.075   06:16:52	-- common/autotest_common.sh@10 -- # set +x
00:06:36.075   06:16:52	-- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:06:36.075   06:16:52	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:06:36.075   06:16:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:36.075   06:16:52	-- common/autotest_common.sh@10 -- # set +x
00:06:36.075  ************************************
00:06:36.075  START TEST accel_decomp_mthread
00:06:36.075  ************************************
00:06:36.075   06:16:52	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:06:36.075   06:16:52	-- accel/accel.sh@16 -- # local accel_opc
00:06:36.075   06:16:52	-- accel/accel.sh@17 -- # local accel_module
00:06:36.075    06:16:52	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:06:36.075    06:16:52	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:06:36.075     06:16:52	-- accel/accel.sh@12 -- # build_accel_config
00:06:36.075     06:16:52	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:36.075     06:16:52	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:36.075     06:16:52	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:36.075     06:16:52	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:36.075     06:16:52	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:36.075     06:16:52	-- accel/accel.sh@41 -- # local IFS=,
00:06:36.075     06:16:52	-- accel/accel.sh@42 -- # jq -r .
00:06:36.075  [2024-12-16 06:16:52.768472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:36.075  [2024-12-16 06:16:52.768609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59358 ]
00:06:36.075  [2024-12-16 06:16:52.899407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.075  [2024-12-16 06:16:52.962353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.452   06:16:54	-- accel/accel.sh@18 -- # out='Preparing input file...
00:06:37.452  
00:06:37.452  SPDK Configuration:
00:06:37.452  Core mask:      0x1
00:06:37.452  
00:06:37.452  Accel Perf Configuration:
00:06:37.452  Workload Type:  decompress
00:06:37.452  Transfer size:  4096 bytes
00:06:37.452  Vector count    1
00:06:37.452  Module:         software
00:06:37.452  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:37.452  Queue depth:    32
00:06:37.452  Allocate depth: 32
00:06:37.452  # threads/core: 2
00:06:37.452  Run time:       1 seconds
00:06:37.452  Verify:         Yes
00:06:37.452  
00:06:37.452  Running for 1 seconds...
00:06:37.452  
00:06:37.452  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:37.452  ------------------------------------------------------------------------------------
00:06:37.452  0,1                       42176/s         77 MiB/s                0                0
00:06:37.452  0,0                       42048/s         77 MiB/s                0                0
00:06:37.452  ====================================================================================
00:06:37.452  Total                     84224/s        329 MiB/s                0                0'
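This run keeps the default single-core mask but passes -T 2, so two worker threads share core 0; that is why the table has rows 0,0 and 0,1 instead of one row per core, and their transfer counts add up to the Total.

    # Per-thread rows from the table above sum to the Total row.
    echo "$((42176 + 42048))/s"   # prints: 84224/s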
00:06:37.452   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.452    06:16:54	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:06:37.452   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.452    06:16:54	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:06:37.452     06:16:54	-- accel/accel.sh@12 -- # build_accel_config
00:06:37.452     06:16:54	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:37.452     06:16:54	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:37.452     06:16:54	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:37.452     06:16:54	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:37.452     06:16:54	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:37.452     06:16:54	-- accel/accel.sh@41 -- # local IFS=,
00:06:37.452     06:16:54	-- accel/accel.sh@42 -- # jq -r .
00:06:37.452  [2024-12-16 06:16:54.203467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:37.452  [2024-12-16 06:16:54.204007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ]
00:06:37.452  [2024-12-16 06:16:54.340158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.452  [2024-12-16 06:16:54.413980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=0x1
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=decompress
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@24 -- # accel_opc=decompress
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val='4096 bytes'
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=software
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@23 -- # accel_module=software
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=32
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=32
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=2
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=Yes
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:37.711   06:16:54	-- accel/accel.sh@21 -- # val=
00:06:37.711   06:16:54	-- accel/accel.sh@22 -- # case "$var" in
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # IFS=:
00:06:37.711   06:16:54	-- accel/accel.sh@20 -- # read -r var val
00:06:39.088   06:16:55	-- accel/accel.sh@21 -- # val=
00:06:39.088   06:16:55	-- accel/accel.sh@22 -- # case "$var" in
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # IFS=:
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # read -r var val
00:06:39.088   06:16:55	-- accel/accel.sh@21 -- # val=
00:06:39.088   06:16:55	-- accel/accel.sh@22 -- # case "$var" in
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # IFS=:
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # read -r var val
00:06:39.088   06:16:55	-- accel/accel.sh@21 -- # val=
00:06:39.088   06:16:55	-- accel/accel.sh@22 -- # case "$var" in
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # IFS=:
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # read -r var val
00:06:39.088   06:16:55	-- accel/accel.sh@21 -- # val=
00:06:39.088   06:16:55	-- accel/accel.sh@22 -- # case "$var" in
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # IFS=:
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # read -r var val
00:06:39.088   06:16:55	-- accel/accel.sh@21 -- # val=
00:06:39.088   06:16:55	-- accel/accel.sh@22 -- # case "$var" in
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # IFS=:
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # read -r var val
00:06:39.088   06:16:55	-- accel/accel.sh@21 -- # val=
00:06:39.088   06:16:55	-- accel/accel.sh@22 -- # case "$var" in
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # IFS=:
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # read -r var val
00:06:39.088   06:16:55	-- accel/accel.sh@21 -- # val=
00:06:39.088   06:16:55	-- accel/accel.sh@22 -- # case "$var" in
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # IFS=:
00:06:39.088   06:16:55	-- accel/accel.sh@20 -- # read -r var val
00:06:39.088   06:16:55	-- accel/accel.sh@28 -- # [[ -n software ]]

00:06:39.088   06:16:55	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:06:39.088   06:16:55	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:39.088  
00:06:39.088  real	0m2.883s
00:06:39.088  user	0m2.475s
00:06:39.088  sys	0m0.208s
00:06:39.088   06:16:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:39.088   06:16:55	-- common/autotest_common.sh@10 -- # set +x
00:06:39.088  ************************************
00:06:39.088  END TEST accel_decomp_mthread
00:06:39.088  ************************************
00:06:39.088   06:16:55	-- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:39.088   06:16:55	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:06:39.088   06:16:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:39.088   06:16:55	-- common/autotest_common.sh@10 -- # set +x
00:06:39.088  ************************************
00:06:39.088  START TEST accel_deomp_full_mthread
00:06:39.088  ************************************
00:06:39.088   06:16:55	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:39.088   06:16:55	-- accel/accel.sh@16 -- # local accel_opc
00:06:39.088   06:16:55	-- accel/accel.sh@17 -- # local accel_module
00:06:39.088    06:16:55	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:39.088    06:16:55	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:39.088     06:16:55	-- accel/accel.sh@12 -- # build_accel_config
00:06:39.088     06:16:55	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:39.088     06:16:55	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:39.088     06:16:55	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:39.088     06:16:55	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:39.088     06:16:55	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:39.088     06:16:55	-- accel/accel.sh@41 -- # local IFS=,
00:06:39.088     06:16:55	-- accel/accel.sh@42 -- # jq -r .
00:06:39.088  [2024-12-16 06:16:55.710404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:39.088  [2024-12-16 06:16:55.710706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59407 ]
00:06:39.088  [2024-12-16 06:16:55.838092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.088  [2024-12-16 06:16:55.900808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.466   06:16:57	-- accel/accel.sh@18 -- # out='Preparing input file...
00:06:40.466  
00:06:40.466  SPDK Configuration:
00:06:40.466  Core mask:      0x1
00:06:40.466  
00:06:40.466  Accel Perf Configuration:
00:06:40.466  Workload Type:  decompress
00:06:40.466  Transfer size:  111250 bytes
00:06:40.466  Vector count    1
00:06:40.466  Module:         software
00:06:40.466  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:40.466  Queue depth:    32
00:06:40.466  Allocate depth: 32
00:06:40.466  # threads/core: 2
00:06:40.466  Run time:       1 seconds
00:06:40.466  Verify:         Yes
00:06:40.466  
00:06:40.466  Running for 1 seconds...
00:06:40.466  
00:06:40.466  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:06:40.466  ------------------------------------------------------------------------------------
00:06:40.466  0,1                        2880/s        118 MiB/s                0                0
00:06:40.466  0,0                        2848/s        117 MiB/s                0                0
00:06:40.466  ====================================================================================
00:06:40.466  Total                      5728/s        607 MiB/s                0                0'
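Comparing the two single-core mthread runs shows the effect of the larger transfers: the same software decompress path roughly doubles its throughput when it is fed full-size blocks instead of 4 KiB chunks.

    # Totals from the two mthread tables (the 4 KiB run vs. the -o 0 run above).
    echo "4 KiB: $((84224 * 4096 / 1048576)) MiB/s, full-size: $((5728 * 111250 / 1048576)) MiB/s"
    # prints: 4 KiB: 329 MiB/s, full-size: 607 MiB/s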
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466    06:16:57	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466    06:16:57	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:40.466     06:16:57	-- accel/accel.sh@12 -- # build_accel_config
00:06:40.466     06:16:57	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:40.466     06:16:57	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:40.466     06:16:57	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:40.466     06:16:57	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:40.466     06:16:57	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:40.466     06:16:57	-- accel/accel.sh@41 -- # local IFS=,
00:06:40.466     06:16:57	-- accel/accel.sh@42 -- # jq -r .
00:06:40.466  [2024-12-16 06:16:57.160448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:40.466  [2024-12-16 06:16:57.160571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59426 ]
00:06:40.466  [2024-12-16 06:16:57.292331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.466  [2024-12-16 06:16:57.356054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=0x1
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=decompress
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@24 -- # accel_opc=decompress
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val='111250 bytes'
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=software
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@23 -- # accel_module=software
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=32
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=32
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=2
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val='1 seconds'
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=Yes
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:40.466   06:16:57	-- accel/accel.sh@21 -- # val=
00:06:40.466   06:16:57	-- accel/accel.sh@22 -- # case "$var" in
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # IFS=:
00:06:40.466   06:16:57	-- accel/accel.sh@20 -- # read -r var val
00:06:41.844   06:16:58	-- accel/accel.sh@21 -- # val=
00:06:41.844   06:16:58	-- accel/accel.sh@22 -- # case "$var" in
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # IFS=:
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # read -r var val
00:06:41.844   06:16:58	-- accel/accel.sh@21 -- # val=
00:06:41.844   06:16:58	-- accel/accel.sh@22 -- # case "$var" in
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # IFS=:
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # read -r var val
00:06:41.844   06:16:58	-- accel/accel.sh@21 -- # val=
00:06:41.844   06:16:58	-- accel/accel.sh@22 -- # case "$var" in
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # IFS=:
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # read -r var val
00:06:41.844   06:16:58	-- accel/accel.sh@21 -- # val=
00:06:41.844   06:16:58	-- accel/accel.sh@22 -- # case "$var" in
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # IFS=:
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # read -r var val
00:06:41.844   06:16:58	-- accel/accel.sh@21 -- # val=
00:06:41.844   06:16:58	-- accel/accel.sh@22 -- # case "$var" in
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # IFS=:
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # read -r var val
00:06:41.844   06:16:58	-- accel/accel.sh@21 -- # val=
00:06:41.844   06:16:58	-- accel/accel.sh@22 -- # case "$var" in
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # IFS=:
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # read -r var val
00:06:41.844   06:16:58	-- accel/accel.sh@21 -- # val=
00:06:41.844   06:16:58	-- accel/accel.sh@22 -- # case "$var" in
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # IFS=:
00:06:41.844   06:16:58	-- accel/accel.sh@20 -- # read -r var val
00:06:41.844   06:16:58	-- accel/accel.sh@28 -- # [[ -n software ]]
00:06:41.844   06:16:58	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:06:41.844   06:16:58	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:41.844  ************************************
00:06:41.844  END TEST accel_deomp_full_mthread
00:06:41.844  ************************************
00:06:41.844  
00:06:41.844  real	0m2.915s
00:06:41.844  user	0m2.513s
00:06:41.844  sys	0m0.201s
00:06:41.844   06:16:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:41.844   06:16:58	-- common/autotest_common.sh@10 -- # set +x
00:06:41.844   06:16:58	-- accel/accel.sh@116 -- # [[ n == y ]]
00:06:41.844   06:16:58	-- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:41.844    06:16:58	-- accel/accel.sh@129 -- # build_accel_config
00:06:41.844   06:16:58	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:06:41.844    06:16:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:41.844   06:16:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:41.844    06:16:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:41.844   06:16:58	-- common/autotest_common.sh@10 -- # set +x
00:06:41.844    06:16:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:41.844    06:16:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:41.844    06:16:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:41.844    06:16:58	-- accel/accel.sh@41 -- # local IFS=,
00:06:41.844    06:16:58	-- accel/accel.sh@42 -- # jq -r .
00:06:41.844  ************************************
00:06:41.844  START TEST accel_dif_functional_tests
00:06:41.844  ************************************
00:06:41.844   06:16:58	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:41.844  [2024-12-16 06:16:58.707004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:41.844  [2024-12-16 06:16:58.707114] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59462 ]
00:06:42.104  [2024-12-16 06:16:58.843343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:42.104  [2024-12-16 06:16:58.927874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:42.104  [2024-12-16 06:16:58.928015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:42.104  [2024-12-16 06:16:58.928017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.104  
00:06:42.104  
00:06:42.104       CUnit - A unit testing framework for C - Version 2.1-3
00:06:42.104       http://cunit.sourceforge.net/
00:06:42.104  
00:06:42.104  
00:06:42.104  Suite: accel_dif
00:06:42.104    Test: verify: DIF generated, GUARD check ...passed
00:06:42.104    Test: verify: DIF generated, APPTAG check ...passed
00:06:42.104    Test: verify: DIF generated, REFTAG check ...passed
00:06:42.104    Test: verify: DIF not generated, GUARD check ...passed
00:06:42.104    Test: verify: DIF not generated, APPTAG check ...[2024-12-16 06:16:59.014344] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:06:42.104  [2024-12-16 06:16:59.014479] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:06:42.104  passed
00:06:42.104    Test: verify: DIF not generated, REFTAG check ...[2024-12-16 06:16:59.014596] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:06:42.104  [2024-12-16 06:16:59.014666] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:06:42.104  [2024-12-16 06:16:59.014700] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:42.104  passed
00:06:42.104    Test: verify: APPTAG correct, APPTAG check ...[2024-12-16 06:16:59.014790] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:42.104  passed
00:06:42.104    Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-16 06:16:59.014958] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30,  Expected=28, Actual=14
00:06:42.104  passed
00:06:42.104    Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:42.104    Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:42.104    Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:42.104    Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-16 06:16:59.015290] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:42.104  passed
00:06:42.104    Test: generate copy: DIF generated, GUARD check ...passed
00:06:42.104    Test: generate copy: DIF generated, APTTAG check ...passed
00:06:42.104    Test: generate copy: DIF generated, REFTAG check ...passed
00:06:42.104    Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:42.104    Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:42.104    Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:42.104    Test: generate copy: iovecs-len validate ...[2024-12-16 06:16:59.015911] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:42.104  passed
00:06:42.104    Test: generate copy: buffer alignment validate ...passed
00:06:42.104  
00:06:42.104  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:42.104                suites      1      1    n/a      0        0
00:06:42.104                 tests     20     20     20      0        0
00:06:42.104               asserts    204    204    204      0      n/a
00:06:42.104  
00:06:42.104  Elapsed time =    0.005 seconds
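The DIF suite above is driven by the standalone dif binary rather than accel_perf; the harness hands it the accel JSON configuration through the /dev/fd/62 path seen in the traced command. A minimal sketch of the same hand-off, assuming an empty JSON object is enough when no extra accel modules are configured, is:

    # Pass a config over a /dev/fd path via process substitution; '{}' is an
    # assumed placeholder for the no-extra-modules case.
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{}')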
00:06:42.363  ************************************
00:06:42.363  END TEST accel_dif_functional_tests
00:06:42.363  ************************************
00:06:42.363  
00:06:42.363  real	0m0.564s
00:06:42.363  user	0m0.750s
00:06:42.363  sys	0m0.146s
00:06:42.363   06:16:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:42.363   06:16:59	-- common/autotest_common.sh@10 -- # set +x
00:06:42.363  ************************************
00:06:42.363  END TEST accel
00:06:42.363  ************************************
00:06:42.363  
00:06:42.363  real	1m2.703s
00:06:42.363  user	1m7.125s
00:06:42.363  sys	0m5.814s
00:06:42.363   06:16:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:42.363   06:16:59	-- common/autotest_common.sh@10 -- # set +x
00:06:42.363   06:16:59	-- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:06:42.363   06:16:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:42.363   06:16:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:42.363   06:16:59	-- common/autotest_common.sh@10 -- # set +x
00:06:42.363  ************************************
00:06:42.363  START TEST accel_rpc
00:06:42.363  ************************************
00:06:42.363   06:16:59	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:06:42.621  * Looking for test storage...
00:06:42.622  * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:06:42.622    06:16:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:42.622     06:16:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:42.622     06:16:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:42.622    06:16:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:42.622    06:16:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:42.622    06:16:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:42.622    06:16:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:42.622    06:16:59	-- scripts/common.sh@335 -- # IFS=.-:
00:06:42.622    06:16:59	-- scripts/common.sh@335 -- # read -ra ver1
00:06:42.622    06:16:59	-- scripts/common.sh@336 -- # IFS=.-:
00:06:42.622    06:16:59	-- scripts/common.sh@336 -- # read -ra ver2
00:06:42.622    06:16:59	-- scripts/common.sh@337 -- # local 'op=<'
00:06:42.622    06:16:59	-- scripts/common.sh@339 -- # ver1_l=2
00:06:42.622    06:16:59	-- scripts/common.sh@340 -- # ver2_l=1
00:06:42.622    06:16:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:42.622    06:16:59	-- scripts/common.sh@343 -- # case "$op" in
00:06:42.622    06:16:59	-- scripts/common.sh@344 -- # : 1
00:06:42.622    06:16:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:42.622    06:16:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:42.622     06:16:59	-- scripts/common.sh@364 -- # decimal 1
00:06:42.622     06:16:59	-- scripts/common.sh@352 -- # local d=1
00:06:42.622     06:16:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:42.622     06:16:59	-- scripts/common.sh@354 -- # echo 1
00:06:42.622    06:16:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:42.622     06:16:59	-- scripts/common.sh@365 -- # decimal 2
00:06:42.622     06:16:59	-- scripts/common.sh@352 -- # local d=2
00:06:42.622     06:16:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:42.622     06:16:59	-- scripts/common.sh@354 -- # echo 2
00:06:42.622    06:16:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:42.622    06:16:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:42.622    06:16:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:42.622    06:16:59	-- scripts/common.sh@367 -- # return 0
00:06:42.622  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:42.622    06:16:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:42.622    06:16:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:42.622  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.622  		--rc genhtml_branch_coverage=1
00:06:42.622  		--rc genhtml_function_coverage=1
00:06:42.622  		--rc genhtml_legend=1
00:06:42.622  		--rc geninfo_all_blocks=1
00:06:42.622  		--rc geninfo_unexecuted_blocks=1
00:06:42.622  		
00:06:42.622  		'
00:06:42.622    06:16:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:42.622  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.622  		--rc genhtml_branch_coverage=1
00:06:42.622  		--rc genhtml_function_coverage=1
00:06:42.622  		--rc genhtml_legend=1
00:06:42.622  		--rc geninfo_all_blocks=1
00:06:42.622  		--rc geninfo_unexecuted_blocks=1
00:06:42.622  		
00:06:42.622  		'
00:06:42.622    06:16:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:42.622  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.622  		--rc genhtml_branch_coverage=1
00:06:42.622  		--rc genhtml_function_coverage=1
00:06:42.622  		--rc genhtml_legend=1
00:06:42.622  		--rc geninfo_all_blocks=1
00:06:42.622  		--rc geninfo_unexecuted_blocks=1
00:06:42.622  		
00:06:42.622  		'
00:06:42.622    06:16:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:42.622  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.622  		--rc genhtml_branch_coverage=1
00:06:42.622  		--rc genhtml_function_coverage=1
00:06:42.622  		--rc genhtml_legend=1
00:06:42.622  		--rc geninfo_all_blocks=1
00:06:42.622  		--rc geninfo_unexecuted_blocks=1
00:06:42.622  		
00:06:42.622  		'
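The xtrace above is scripts/common.sh comparing two dotted versions field by field (here "lt 1.15 2", i.e. does 1.15 sort before the installed lcov's 2) before choosing the lcov --rc coverage options. A minimal sketch of that comparison idea, not the SPDK helper itself (function name and layout are my own):

	version_lt() {	# returns 0 when $1 sorts before $2, comparing dot/dash separated fields numerically
		local -a v1 v2
		IFS=.- read -ra v1 <<< "$1"
		IFS=.- read -ra v2 <<< "$2"
		local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
		for (( i = 0; i < n; i++ )); do
			(( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
			(( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
		done
		return 1
	}
	version_lt 1.15 2 && echo "1.15 sorts before 2"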
00:06:42.622   06:16:59	-- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:42.622   06:16:59	-- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59536
00:06:42.622   06:16:59	-- accel/accel_rpc.sh@15 -- # waitforlisten 59536
00:06:42.622   06:16:59	-- common/autotest_common.sh@829 -- # '[' -z 59536 ']'
00:06:42.622   06:16:59	-- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:06:42.622   06:16:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:42.622   06:16:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:42.622   06:16:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:42.622   06:16:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:42.622   06:16:59	-- common/autotest_common.sh@10 -- # set +x
00:06:42.622  [2024-12-16 06:16:59.553800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:42.622  [2024-12-16 06:16:59.554119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ]
00:06:42.881  [2024-12-16 06:16:59.685686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.881  [2024-12-16 06:16:59.759738] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:42.881  [2024-12-16 06:16:59.760119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
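waitforlisten (invoked just above for pid 59536 with max_retries=100) simply polls the freshly started target until its RPC socket answers. A rough sketch of the idea only, not the real helper from autotest_common.sh, assuming the commands run from the SPDK repo root and the default /var/tmp/spdk.sock socket:

	waitforlisten_sketch() {
		local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
		for (( i = 0; i < 100; i++ )); do		# max_retries=100, as in the trace
			kill -0 "$pid" 2> /dev/null || return 1	# target died before it ever listened
			if [[ -S $rpc_addr ]] && ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
				return 0			# RPC server is answering
			fi
			sleep 0.1
		done
		return 1
	}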
00:06:43.817   06:17:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:43.817   06:17:00	-- common/autotest_common.sh@862 -- # return 0
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:43.817   06:17:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:43.817   06:17:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:43.817   06:17:00	-- common/autotest_common.sh@10 -- # set +x
00:06:43.817  ************************************
00:06:43.817  START TEST accel_assign_opcode
00:06:43.817  ************************************
00:06:43.817   06:17:00	-- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:06:43.817   06:17:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.817   06:17:00	-- common/autotest_common.sh@10 -- # set +x
00:06:43.817  [2024-12-16 06:17:00.496646] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:43.817   06:17:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:43.817   06:17:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.817   06:17:00	-- common/autotest_common.sh@10 -- # set +x
00:06:43.817  [2024-12-16 06:17:00.504657] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:43.817   06:17:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:43.817   06:17:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.817   06:17:00	-- common/autotest_common.sh@10 -- # set +x
00:06:43.817   06:17:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:43.817   06:17:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:43.817   06:17:00	-- common/autotest_common.sh@10 -- # set +x
00:06:43.817   06:17:00	-- accel/accel_rpc.sh@42 -- # grep software
00:06:43.817   06:17:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.817  software
00:06:43.817  ************************************
00:06:43.817  END TEST accel_assign_opcode
00:06:43.817  ************************************
00:06:43.817  
00:06:43.817  real	0m0.274s
00:06:43.817  user	0m0.048s
00:06:43.817  sys	0m0.011s
00:06:43.817   06:17:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:43.818   06:17:00	-- common/autotest_common.sh@10 -- # set +x
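The assign-opcode test above boils down to three RPCs issued while spdk_tgt is still in its pre-init state (it was started with --wait-for-rpc). The same sequence by hand, as a sketch assuming the SPDK repo root as working directory:

	./scripts/rpc.py accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
	./scripts/rpc.py framework_start_init                    # finish subsystem initialization
	./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints: software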
00:06:44.076   06:17:00	-- accel/accel_rpc.sh@55 -- # killprocess 59536
00:06:44.076   06:17:00	-- common/autotest_common.sh@936 -- # '[' -z 59536 ']'
00:06:44.076   06:17:00	-- common/autotest_common.sh@940 -- # kill -0 59536
00:06:44.076    06:17:00	-- common/autotest_common.sh@941 -- # uname
00:06:44.076   06:17:00	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:44.077    06:17:00	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59536
00:06:44.077  killing process with pid 59536
00:06:44.077   06:17:00	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:44.077   06:17:00	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:44.077   06:17:00	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 59536'
00:06:44.077   06:17:00	-- common/autotest_common.sh@955 -- # kill 59536
00:06:44.077   06:17:00	-- common/autotest_common.sh@960 -- # wait 59536
00:06:44.335  
00:06:44.335  real	0m1.908s
00:06:44.335  user	0m1.944s
00:06:44.335  sys	0m0.472s
00:06:44.335   06:17:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:44.335  ************************************
00:06:44.335   06:17:01	-- common/autotest_common.sh@10 -- # set +x
00:06:44.335  END TEST accel_rpc
00:06:44.335  ************************************
00:06:44.335   06:17:01	-- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:06:44.335   06:17:01	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:44.335   06:17:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:44.335   06:17:01	-- common/autotest_common.sh@10 -- # set +x
00:06:44.335  ************************************
00:06:44.335  START TEST app_cmdline
00:06:44.335  ************************************
00:06:44.335   06:17:01	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:06:44.594  * Looking for test storage...
00:06:44.594  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:06:44.594    06:17:01	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:44.594     06:17:01	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:44.594     06:17:01	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:44.594    06:17:01	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:44.594    06:17:01	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:44.594    06:17:01	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:44.594    06:17:01	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:44.594    06:17:01	-- scripts/common.sh@335 -- # IFS=.-:
00:06:44.594    06:17:01	-- scripts/common.sh@335 -- # read -ra ver1
00:06:44.594    06:17:01	-- scripts/common.sh@336 -- # IFS=.-:
00:06:44.594    06:17:01	-- scripts/common.sh@336 -- # read -ra ver2
00:06:44.594    06:17:01	-- scripts/common.sh@337 -- # local 'op=<'
00:06:44.594    06:17:01	-- scripts/common.sh@339 -- # ver1_l=2
00:06:44.594    06:17:01	-- scripts/common.sh@340 -- # ver2_l=1
00:06:44.594    06:17:01	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:44.594    06:17:01	-- scripts/common.sh@343 -- # case "$op" in
00:06:44.594    06:17:01	-- scripts/common.sh@344 -- # : 1
00:06:44.594    06:17:01	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:44.594    06:17:01	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:44.594     06:17:01	-- scripts/common.sh@364 -- # decimal 1
00:06:44.594     06:17:01	-- scripts/common.sh@352 -- # local d=1
00:06:44.594     06:17:01	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:44.594     06:17:01	-- scripts/common.sh@354 -- # echo 1
00:06:44.594    06:17:01	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:44.594     06:17:01	-- scripts/common.sh@365 -- # decimal 2
00:06:44.595     06:17:01	-- scripts/common.sh@352 -- # local d=2
00:06:44.595     06:17:01	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:44.595     06:17:01	-- scripts/common.sh@354 -- # echo 2
00:06:44.595    06:17:01	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:44.595    06:17:01	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:44.595    06:17:01	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:44.595    06:17:01	-- scripts/common.sh@367 -- # return 0
00:06:44.595    06:17:01	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:44.595    06:17:01	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:44.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:44.595  		--rc genhtml_branch_coverage=1
00:06:44.595  		--rc genhtml_function_coverage=1
00:06:44.595  		--rc genhtml_legend=1
00:06:44.595  		--rc geninfo_all_blocks=1
00:06:44.595  		--rc geninfo_unexecuted_blocks=1
00:06:44.595  		
00:06:44.595  		'
00:06:44.595    06:17:01	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:44.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:44.595  		--rc genhtml_branch_coverage=1
00:06:44.595  		--rc genhtml_function_coverage=1
00:06:44.595  		--rc genhtml_legend=1
00:06:44.595  		--rc geninfo_all_blocks=1
00:06:44.595  		--rc geninfo_unexecuted_blocks=1
00:06:44.595  		
00:06:44.595  		'
00:06:44.595    06:17:01	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:44.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:44.595  		--rc genhtml_branch_coverage=1
00:06:44.595  		--rc genhtml_function_coverage=1
00:06:44.595  		--rc genhtml_legend=1
00:06:44.595  		--rc geninfo_all_blocks=1
00:06:44.595  		--rc geninfo_unexecuted_blocks=1
00:06:44.595  		
00:06:44.595  		'
00:06:44.595    06:17:01	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:44.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:44.595  		--rc genhtml_branch_coverage=1
00:06:44.595  		--rc genhtml_function_coverage=1
00:06:44.595  		--rc genhtml_legend=1
00:06:44.595  		--rc geninfo_all_blocks=1
00:06:44.595  		--rc geninfo_unexecuted_blocks=1
00:06:44.595  		
00:06:44.595  		'
00:06:44.595   06:17:01	-- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:44.595   06:17:01	-- app/cmdline.sh@17 -- # spdk_tgt_pid=59653
00:06:44.595   06:17:01	-- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:44.595   06:17:01	-- app/cmdline.sh@18 -- # waitforlisten 59653
00:06:44.595   06:17:01	-- common/autotest_common.sh@829 -- # '[' -z 59653 ']'
00:06:44.595   06:17:01	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:44.595   06:17:01	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:44.595   06:17:01	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:44.595  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:44.595   06:17:01	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:44.595   06:17:01	-- common/autotest_common.sh@10 -- # set +x
00:06:44.595  [2024-12-16 06:17:01.513006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:44.595  [2024-12-16 06:17:01.513288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59653 ]
00:06:44.853  [2024-12-16 06:17:01.650901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.853  [2024-12-16 06:17:01.723597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:44.853  [2024-12-16 06:17:01.723991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.790   06:17:02	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:45.790   06:17:02	-- common/autotest_common.sh@862 -- # return 0
00:06:45.790   06:17:02	-- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:06:45.790  {
00:06:45.790    "fields": {
00:06:45.790      "commit": "c13c99a5e",
00:06:45.790      "major": 24,
00:06:45.790      "minor": 1,
00:06:45.790      "patch": 1,
00:06:45.790      "suffix": "-pre"
00:06:45.790    },
00:06:45.790    "version": "SPDK v24.01.1-pre git sha1 c13c99a5e"
00:06:45.790  }
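That JSON is the raw spdk_get_version reply; individual fields can be pulled out with jq, for example:

	./scripts/rpc.py spdk_get_version | jq -r .version    # SPDK v24.01.1-pre git sha1 c13c99a5e
	./scripts/rpc.py spdk_get_version | jq -r '"\(.fields.major).\(.fields.minor).\(.fields.patch)\(.fields.suffix)"'    # 24.1.1-pre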
00:06:45.790   06:17:02	-- app/cmdline.sh@22 -- # expected_methods=()
00:06:45.790   06:17:02	-- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:45.790   06:17:02	-- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:45.790   06:17:02	-- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:45.790    06:17:02	-- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:45.790    06:17:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:45.790    06:17:02	-- common/autotest_common.sh@10 -- # set +x
00:06:45.790    06:17:02	-- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:45.790    06:17:02	-- app/cmdline.sh@26 -- # sort
00:06:46.048    06:17:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.048   06:17:02	-- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:46.048   06:17:02	-- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:46.048   06:17:02	-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:46.048   06:17:02	-- common/autotest_common.sh@650 -- # local es=0
00:06:46.048   06:17:02	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:46.048   06:17:02	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:46.048   06:17:02	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:46.048    06:17:02	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:46.048   06:17:02	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:46.048    06:17:02	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:46.048   06:17:02	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:46.048   06:17:02	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:46.048   06:17:02	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:06:46.048   06:17:02	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:46.308  2024/12/16 06:17:03 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found
00:06:46.308  request:
00:06:46.308  {
00:06:46.308    "method": "env_dpdk_get_mem_stats",
00:06:46.308    "params": {}
00:06:46.308  }
00:06:46.308  Got JSON-RPC error response
00:06:46.308  GoRPCClient: error on JSON-RPC call
00:06:46.308   06:17:03	-- common/autotest_common.sh@653 -- # es=1
00:06:46.308   06:17:03	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:46.308   06:17:03	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:46.308   06:17:03	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
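The -32601 failure above is expected: this spdk_tgt instance was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method is rejected. A hedged sketch of the same behavior outside the test harness (paths assume the repo root, and the sleep is a crude stand-in for waitforlisten):

	./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
	sleep 1                                                  # wait for /var/tmp/spdk.sock to appear
	./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # exactly the two allowed methods
	./scripts/rpc.py spdk_get_version                        # allowed: prints the version JSON
	./scripts/rpc.py env_dpdk_get_mem_stats                  # rejected: Code=-32601 Msg=Method not found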
00:06:46.308   06:17:03	-- app/cmdline.sh@1 -- # killprocess 59653
00:06:46.308   06:17:03	-- common/autotest_common.sh@936 -- # '[' -z 59653 ']'
00:06:46.308   06:17:03	-- common/autotest_common.sh@940 -- # kill -0 59653
00:06:46.308    06:17:03	-- common/autotest_common.sh@941 -- # uname
00:06:46.308   06:17:03	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:46.308    06:17:03	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59653
00:06:46.308   06:17:03	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:46.308   06:17:03	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:46.308  killing process with pid 59653
00:06:46.308   06:17:03	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 59653'
00:06:46.308   06:17:03	-- common/autotest_common.sh@955 -- # kill 59653
00:06:46.308   06:17:03	-- common/autotest_common.sh@960 -- # wait 59653
00:06:46.566  ************************************
00:06:46.566  END TEST app_cmdline
00:06:46.566  ************************************
00:06:46.566  
00:06:46.566  real	0m2.228s
00:06:46.566  user	0m2.770s
00:06:46.566  sys	0m0.509s
00:06:46.566   06:17:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:46.566   06:17:03	-- common/autotest_common.sh@10 -- # set +x
00:06:46.566   06:17:03	-- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:06:46.566   06:17:03	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:46.566   06:17:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:46.566   06:17:03	-- common/autotest_common.sh@10 -- # set +x
00:06:46.825  ************************************
00:06:46.825  START TEST version
00:06:46.825  ************************************
00:06:46.825   06:17:03	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:06:46.825  * Looking for test storage...
00:06:46.825  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:06:46.825    06:17:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:46.825     06:17:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:46.825     06:17:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:46.825    06:17:03	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:46.825    06:17:03	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:46.825    06:17:03	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:46.825    06:17:03	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:46.825    06:17:03	-- scripts/common.sh@335 -- # IFS=.-:
00:06:46.825    06:17:03	-- scripts/common.sh@335 -- # read -ra ver1
00:06:46.825    06:17:03	-- scripts/common.sh@336 -- # IFS=.-:
00:06:46.825    06:17:03	-- scripts/common.sh@336 -- # read -ra ver2
00:06:46.825    06:17:03	-- scripts/common.sh@337 -- # local 'op=<'
00:06:46.825    06:17:03	-- scripts/common.sh@339 -- # ver1_l=2
00:06:46.825    06:17:03	-- scripts/common.sh@340 -- # ver2_l=1
00:06:46.825    06:17:03	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:46.825    06:17:03	-- scripts/common.sh@343 -- # case "$op" in
00:06:46.825    06:17:03	-- scripts/common.sh@344 -- # : 1
00:06:46.825    06:17:03	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:46.825    06:17:03	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:46.825     06:17:03	-- scripts/common.sh@364 -- # decimal 1
00:06:46.825     06:17:03	-- scripts/common.sh@352 -- # local d=1
00:06:46.825     06:17:03	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:46.825     06:17:03	-- scripts/common.sh@354 -- # echo 1
00:06:46.825    06:17:03	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:46.825     06:17:03	-- scripts/common.sh@365 -- # decimal 2
00:06:46.825     06:17:03	-- scripts/common.sh@352 -- # local d=2
00:06:46.825     06:17:03	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:46.825     06:17:03	-- scripts/common.sh@354 -- # echo 2
00:06:46.826    06:17:03	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:46.826    06:17:03	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:46.826    06:17:03	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:46.826    06:17:03	-- scripts/common.sh@367 -- # return 0
00:06:46.826    06:17:03	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:46.826    06:17:03	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:46.826  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.826  		--rc genhtml_branch_coverage=1
00:06:46.826  		--rc genhtml_function_coverage=1
00:06:46.826  		--rc genhtml_legend=1
00:06:46.826  		--rc geninfo_all_blocks=1
00:06:46.826  		--rc geninfo_unexecuted_blocks=1
00:06:46.826  		
00:06:46.826  		'
00:06:46.826    06:17:03	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:46.826  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.826  		--rc genhtml_branch_coverage=1
00:06:46.826  		--rc genhtml_function_coverage=1
00:06:46.826  		--rc genhtml_legend=1
00:06:46.826  		--rc geninfo_all_blocks=1
00:06:46.826  		--rc geninfo_unexecuted_blocks=1
00:06:46.826  		
00:06:46.826  		'
00:06:46.826    06:17:03	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:46.826  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.826  		--rc genhtml_branch_coverage=1
00:06:46.826  		--rc genhtml_function_coverage=1
00:06:46.826  		--rc genhtml_legend=1
00:06:46.826  		--rc geninfo_all_blocks=1
00:06:46.826  		--rc geninfo_unexecuted_blocks=1
00:06:46.826  		
00:06:46.826  		'
00:06:46.826    06:17:03	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:46.826  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.826  		--rc genhtml_branch_coverage=1
00:06:46.826  		--rc genhtml_function_coverage=1
00:06:46.826  		--rc genhtml_legend=1
00:06:46.826  		--rc geninfo_all_blocks=1
00:06:46.826  		--rc geninfo_unexecuted_blocks=1
00:06:46.826  		
00:06:46.826  		'
00:06:46.826    06:17:03	-- app/version.sh@17 -- # get_header_version major
00:06:46.826    06:17:03	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:46.826    06:17:03	-- app/version.sh@14 -- # cut -f2
00:06:46.826    06:17:03	-- app/version.sh@14 -- # tr -d '"'
00:06:46.826   06:17:03	-- app/version.sh@17 -- # major=24
00:06:46.826    06:17:03	-- app/version.sh@18 -- # get_header_version minor
00:06:46.826    06:17:03	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:46.826    06:17:03	-- app/version.sh@14 -- # cut -f2
00:06:46.826    06:17:03	-- app/version.sh@14 -- # tr -d '"'
00:06:46.826   06:17:03	-- app/version.sh@18 -- # minor=1
00:06:46.826    06:17:03	-- app/version.sh@19 -- # get_header_version patch
00:06:46.826    06:17:03	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:46.826    06:17:03	-- app/version.sh@14 -- # cut -f2
00:06:46.826    06:17:03	-- app/version.sh@14 -- # tr -d '"'
00:06:46.826   06:17:03	-- app/version.sh@19 -- # patch=1
00:06:46.826    06:17:03	-- app/version.sh@20 -- # get_header_version suffix
00:06:46.826    06:17:03	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:46.826    06:17:03	-- app/version.sh@14 -- # cut -f2
00:06:46.826    06:17:03	-- app/version.sh@14 -- # tr -d '"'
00:06:46.826   06:17:03	-- app/version.sh@20 -- # suffix=-pre
00:06:46.826   06:17:03	-- app/version.sh@22 -- # version=24.1
00:06:46.826   06:17:03	-- app/version.sh@25 -- # (( patch != 0 ))
00:06:46.826   06:17:03	-- app/version.sh@25 -- # version=24.1.1
00:06:46.826   06:17:03	-- app/version.sh@28 -- # version=24.1.1rc0
00:06:46.826   06:17:03	-- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:46.826    06:17:03	-- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:46.826   06:17:03	-- app/version.sh@30 -- # py_version=24.1.1rc0
00:06:46.826   06:17:03	-- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]]
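version.sh above greps the SPDK_VERSION_* defines out of include/spdk/version.h, assembles 24.1.1rc0 (in this run the -pre suffix maps to rc0), and checks that the Python package reports the same string. A hedged approximation of that flow, with the suffix-to-rc0 mapping assumed from this run rather than taken from the script:

	ver_field() {	# e.g. ver_field MAJOR -> 24
		grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
	}
	version="$(ver_field MAJOR).$(ver_field MINOR).$(ver_field PATCH)"
	[[ $(ver_field SUFFIX) == -pre ]] && version+=rc0
	py_version=$(PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)')
	[[ $version == "$py_version" ]] && echo "header and python package agree: $version"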
00:06:46.826  
00:06:46.826  real	0m0.224s
00:06:46.826  user	0m0.139s
00:06:46.826  sys	0m0.124s
00:06:46.826   06:17:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:46.826   06:17:03	-- common/autotest_common.sh@10 -- # set +x
00:06:46.826  ************************************
00:06:46.826  END TEST version
00:06:46.826  ************************************
00:06:47.085   06:17:03	-- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']'
00:06:47.085    06:17:03	-- spdk/autotest.sh@191 -- # uname -s
00:06:47.085   06:17:03	-- spdk/autotest.sh@191 -- # [[ Linux == Linux ]]
00:06:47.085   06:17:03	-- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]]
00:06:47.085   06:17:03	-- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]]
00:06:47.085   06:17:03	-- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']'
00:06:47.085   06:17:03	-- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']'
00:06:47.085   06:17:03	-- spdk/autotest.sh@255 -- # timing_exit lib
00:06:47.085   06:17:03	-- common/autotest_common.sh@728 -- # xtrace_disable
00:06:47.085   06:17:03	-- common/autotest_common.sh@10 -- # set +x
00:06:47.085   06:17:03	-- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']'
00:06:47.085   06:17:03	-- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']'
00:06:47.085   06:17:03	-- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']'
00:06:47.085   06:17:03	-- spdk/autotest.sh@275 -- # export NET_TYPE
00:06:47.085   06:17:03	-- spdk/autotest.sh@278 -- # '[' tcp = rdma ']'
00:06:47.085   06:17:03	-- spdk/autotest.sh@281 -- # '[' tcp = tcp ']'
00:06:47.085   06:17:03	-- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp
00:06:47.085   06:17:03	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:06:47.085   06:17:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:47.085   06:17:03	-- common/autotest_common.sh@10 -- # set +x
00:06:47.085  ************************************
00:06:47.085  START TEST nvmf_tcp
00:06:47.085  ************************************
00:06:47.085   06:17:03	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp
00:06:47.085  * Looking for test storage...
00:06:47.085  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:06:47.085    06:17:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:47.085     06:17:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:47.085     06:17:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:47.085    06:17:04	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:47.085    06:17:04	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:47.085    06:17:04	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:47.085    06:17:04	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:47.085    06:17:04	-- scripts/common.sh@335 -- # IFS=.-:
00:06:47.085    06:17:04	-- scripts/common.sh@335 -- # read -ra ver1
00:06:47.085    06:17:04	-- scripts/common.sh@336 -- # IFS=.-:
00:06:47.085    06:17:04	-- scripts/common.sh@336 -- # read -ra ver2
00:06:47.085    06:17:04	-- scripts/common.sh@337 -- # local 'op=<'
00:06:47.085    06:17:04	-- scripts/common.sh@339 -- # ver1_l=2
00:06:47.085    06:17:04	-- scripts/common.sh@340 -- # ver2_l=1
00:06:47.085    06:17:04	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:47.085    06:17:04	-- scripts/common.sh@343 -- # case "$op" in
00:06:47.085    06:17:04	-- scripts/common.sh@344 -- # : 1
00:06:47.085    06:17:04	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:47.085    06:17:04	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:47.085     06:17:04	-- scripts/common.sh@364 -- # decimal 1
00:06:47.085     06:17:04	-- scripts/common.sh@352 -- # local d=1
00:06:47.085     06:17:04	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:47.085     06:17:04	-- scripts/common.sh@354 -- # echo 1
00:06:47.085    06:17:04	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:47.085     06:17:04	-- scripts/common.sh@365 -- # decimal 2
00:06:47.085     06:17:04	-- scripts/common.sh@352 -- # local d=2
00:06:47.085     06:17:04	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:47.085     06:17:04	-- scripts/common.sh@354 -- # echo 2
00:06:47.085    06:17:04	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:47.085    06:17:04	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:47.085    06:17:04	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:47.085    06:17:04	-- scripts/common.sh@367 -- # return 0
00:06:47.085    06:17:04	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:47.085    06:17:04	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:47.085  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.085  		--rc genhtml_branch_coverage=1
00:06:47.085  		--rc genhtml_function_coverage=1
00:06:47.085  		--rc genhtml_legend=1
00:06:47.085  		--rc geninfo_all_blocks=1
00:06:47.085  		--rc geninfo_unexecuted_blocks=1
00:06:47.085  		
00:06:47.085  		'
00:06:47.085    06:17:04	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:47.085  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.085  		--rc genhtml_branch_coverage=1
00:06:47.085  		--rc genhtml_function_coverage=1
00:06:47.085  		--rc genhtml_legend=1
00:06:47.085  		--rc geninfo_all_blocks=1
00:06:47.085  		--rc geninfo_unexecuted_blocks=1
00:06:47.085  		
00:06:47.085  		'
00:06:47.085    06:17:04	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:47.085  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.085  		--rc genhtml_branch_coverage=1
00:06:47.085  		--rc genhtml_function_coverage=1
00:06:47.085  		--rc genhtml_legend=1
00:06:47.085  		--rc geninfo_all_blocks=1
00:06:47.085  		--rc geninfo_unexecuted_blocks=1
00:06:47.085  		
00:06:47.085  		'
00:06:47.085    06:17:04	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:47.085  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.085  		--rc genhtml_branch_coverage=1
00:06:47.085  		--rc genhtml_function_coverage=1
00:06:47.085  		--rc genhtml_legend=1
00:06:47.085  		--rc geninfo_all_blocks=1
00:06:47.085  		--rc geninfo_unexecuted_blocks=1
00:06:47.085  		
00:06:47.085  		'
00:06:47.085    06:17:04	-- nvmf/nvmf.sh@10 -- # uname -s
00:06:47.085   06:17:04	-- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:06:47.085   06:17:04	-- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:47.086     06:17:04	-- nvmf/common.sh@7 -- # uname -s
00:06:47.086    06:17:04	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:47.086    06:17:04	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:47.086    06:17:04	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:47.086    06:17:04	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:47.086    06:17:04	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:47.086    06:17:04	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:47.086    06:17:04	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:47.086    06:17:04	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:47.086    06:17:04	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:47.086     06:17:04	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:47.345    06:17:04	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:06:47.345    06:17:04	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:06:47.345    06:17:04	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:47.345    06:17:04	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:47.345    06:17:04	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:06:47.345    06:17:04	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:47.345     06:17:04	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:47.345     06:17:04	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:47.345     06:17:04	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:47.345      06:17:04	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.345      06:17:04	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.345      06:17:04	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.345      06:17:04	-- paths/export.sh@5 -- # export PATH
00:06:47.346      06:17:04	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.346    06:17:04	-- nvmf/common.sh@46 -- # : 0
00:06:47.346    06:17:04	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:06:47.346    06:17:04	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:06:47.346    06:17:04	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:06:47.346    06:17:04	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:47.346    06:17:04	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:47.346    06:17:04	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:06:47.346    06:17:04	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:06:47.346    06:17:04	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:06:47.346   06:17:04	-- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:06:47.346   06:17:04	-- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@")
00:06:47.346   06:17:04	-- nvmf/nvmf.sh@20 -- # timing_enter target
00:06:47.346   06:17:04	-- common/autotest_common.sh@722 -- # xtrace_disable
00:06:47.346   06:17:04	-- common/autotest_common.sh@10 -- # set +x
00:06:47.346   06:17:04	-- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]]
00:06:47.346   06:17:04	-- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:06:47.346   06:17:04	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:06:47.346   06:17:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:47.346   06:17:04	-- common/autotest_common.sh@10 -- # set +x
00:06:47.346  ************************************
00:06:47.346  START TEST nvmf_example
00:06:47.346  ************************************
00:06:47.346   06:17:04	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:06:47.346  * Looking for test storage...
00:06:47.346  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:06:47.346    06:17:04	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:47.346     06:17:04	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:47.346     06:17:04	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:47.346    06:17:04	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:47.346    06:17:04	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:47.346    06:17:04	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:47.346    06:17:04	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:47.346    06:17:04	-- scripts/common.sh@335 -- # IFS=.-:
00:06:47.346    06:17:04	-- scripts/common.sh@335 -- # read -ra ver1
00:06:47.346    06:17:04	-- scripts/common.sh@336 -- # IFS=.-:
00:06:47.346    06:17:04	-- scripts/common.sh@336 -- # read -ra ver2
00:06:47.346    06:17:04	-- scripts/common.sh@337 -- # local 'op=<'
00:06:47.346    06:17:04	-- scripts/common.sh@339 -- # ver1_l=2
00:06:47.346    06:17:04	-- scripts/common.sh@340 -- # ver2_l=1
00:06:47.346    06:17:04	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:47.346    06:17:04	-- scripts/common.sh@343 -- # case "$op" in
00:06:47.346    06:17:04	-- scripts/common.sh@344 -- # : 1
00:06:47.346    06:17:04	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:47.346    06:17:04	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:47.346     06:17:04	-- scripts/common.sh@364 -- # decimal 1
00:06:47.346     06:17:04	-- scripts/common.sh@352 -- # local d=1
00:06:47.346     06:17:04	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:47.346     06:17:04	-- scripts/common.sh@354 -- # echo 1
00:06:47.346    06:17:04	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:47.346     06:17:04	-- scripts/common.sh@365 -- # decimal 2
00:06:47.346     06:17:04	-- scripts/common.sh@352 -- # local d=2
00:06:47.346     06:17:04	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:47.346     06:17:04	-- scripts/common.sh@354 -- # echo 2
00:06:47.346    06:17:04	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:47.346    06:17:04	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:47.346    06:17:04	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:47.346    06:17:04	-- scripts/common.sh@367 -- # return 0
00:06:47.346    06:17:04	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:47.346    06:17:04	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:47.346  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.346  		--rc genhtml_branch_coverage=1
00:06:47.346  		--rc genhtml_function_coverage=1
00:06:47.346  		--rc genhtml_legend=1
00:06:47.346  		--rc geninfo_all_blocks=1
00:06:47.346  		--rc geninfo_unexecuted_blocks=1
00:06:47.346  		
00:06:47.346  		'
00:06:47.346    06:17:04	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:47.346  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.346  		--rc genhtml_branch_coverage=1
00:06:47.346  		--rc genhtml_function_coverage=1
00:06:47.346  		--rc genhtml_legend=1
00:06:47.346  		--rc geninfo_all_blocks=1
00:06:47.346  		--rc geninfo_unexecuted_blocks=1
00:06:47.346  		
00:06:47.346  		'
00:06:47.346    06:17:04	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:47.346  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.346  		--rc genhtml_branch_coverage=1
00:06:47.346  		--rc genhtml_function_coverage=1
00:06:47.346  		--rc genhtml_legend=1
00:06:47.346  		--rc geninfo_all_blocks=1
00:06:47.346  		--rc geninfo_unexecuted_blocks=1
00:06:47.346  		
00:06:47.346  		'
00:06:47.346    06:17:04	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:47.346  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.346  		--rc genhtml_branch_coverage=1
00:06:47.346  		--rc genhtml_function_coverage=1
00:06:47.346  		--rc genhtml_legend=1
00:06:47.346  		--rc geninfo_all_blocks=1
00:06:47.346  		--rc geninfo_unexecuted_blocks=1
00:06:47.346  		
00:06:47.346  		'
00:06:47.346   06:17:04	-- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:47.346     06:17:04	-- nvmf/common.sh@7 -- # uname -s
00:06:47.346    06:17:04	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:47.346    06:17:04	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:47.346    06:17:04	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:47.346    06:17:04	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:47.346    06:17:04	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:47.346    06:17:04	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:47.346    06:17:04	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:47.346    06:17:04	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:47.346    06:17:04	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:47.346     06:17:04	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:47.346    06:17:04	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:06:47.346    06:17:04	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:06:47.346    06:17:04	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:47.346    06:17:04	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:47.346    06:17:04	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:06:47.346    06:17:04	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:47.346     06:17:04	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:47.346     06:17:04	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:47.346     06:17:04	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:47.347      06:17:04	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.347      06:17:04	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.347      06:17:04	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.347      06:17:04	-- paths/export.sh@5 -- # export PATH
00:06:47.347      06:17:04	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.347    06:17:04	-- nvmf/common.sh@46 -- # : 0
00:06:47.347    06:17:04	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:06:47.347    06:17:04	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:06:47.347    06:17:04	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:06:47.347    06:17:04	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:47.347    06:17:04	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:47.347    06:17:04	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:06:47.347    06:17:04	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:06:47.347    06:17:04	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:06:47.347   06:17:04	-- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:06:47.347   06:17:04	-- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:06:47.347   06:17:04	-- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:06:47.347   06:17:04	-- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:06:47.347   06:17:04	-- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:06:47.347   06:17:04	-- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:06:47.347   06:17:04	-- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:06:47.347   06:17:04	-- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:06:47.347   06:17:04	-- common/autotest_common.sh@722 -- # xtrace_disable
00:06:47.347   06:17:04	-- common/autotest_common.sh@10 -- # set +x
00:06:47.347   06:17:04	-- target/nvmf_example.sh@41 -- # nvmftestinit
00:06:47.347   06:17:04	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:06:47.347   06:17:04	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:47.347   06:17:04	-- nvmf/common.sh@436 -- # prepare_net_devs
00:06:47.347   06:17:04	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:06:47.347   06:17:04	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:06:47.347   06:17:04	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:47.347   06:17:04	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:47.347    06:17:04	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:47.347   06:17:04	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:06:47.347   06:17:04	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:06:47.347   06:17:04	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:06:47.347   06:17:04	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:06:47.347   06:17:04	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:06:47.347   06:17:04	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:06:47.347   06:17:04	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:47.347   06:17:04	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:47.347   06:17:04	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:06:47.347   06:17:04	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:06:47.347   06:17:04	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:06:47.347   06:17:04	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:06:47.347   06:17:04	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:06:47.347   06:17:04	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:47.347   06:17:04	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:06:47.347   06:17:04	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:06:47.347   06:17:04	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:06:47.347   06:17:04	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:06:47.347   06:17:04	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:06:47.606  Cannot find device "nvmf_init_br"
00:06:47.606   06:17:04	-- nvmf/common.sh@153 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:06:47.606  Cannot find device "nvmf_tgt_br"
00:06:47.606   06:17:04	-- nvmf/common.sh@154 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:06:47.606  Cannot find device "nvmf_tgt_br2"
00:06:47.606   06:17:04	-- nvmf/common.sh@155 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:06:47.606  Cannot find device "nvmf_init_br"
00:06:47.606   06:17:04	-- nvmf/common.sh@156 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:06:47.606  Cannot find device "nvmf_tgt_br"
00:06:47.606   06:17:04	-- nvmf/common.sh@157 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:06:47.606  Cannot find device "nvmf_tgt_br2"
00:06:47.606   06:17:04	-- nvmf/common.sh@158 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:06:47.606  Cannot find device "nvmf_br"
00:06:47.606   06:17:04	-- nvmf/common.sh@159 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:06:47.606  Cannot find device "nvmf_init_if"
00:06:47.606   06:17:04	-- nvmf/common.sh@160 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:06:47.606  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:06:47.606   06:17:04	-- nvmf/common.sh@161 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:06:47.606  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:06:47.606   06:17:04	-- nvmf/common.sh@162 -- # true
00:06:47.606   06:17:04	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:06:47.606   06:17:04	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:06:47.606   06:17:04	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:06:47.606   06:17:04	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:06:47.606   06:17:04	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:06:47.606   06:17:04	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:06:47.606   06:17:04	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:06:47.606   06:17:04	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:06:47.606   06:17:04	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:06:47.606   06:17:04	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:06:47.606   06:17:04	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:06:47.606   06:17:04	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:06:47.606   06:17:04	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:06:47.606   06:17:04	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:06:47.606   06:17:04	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:06:47.606   06:17:04	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:06:47.606   06:17:04	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:06:47.865   06:17:04	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:06:47.865   06:17:04	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:06:47.865   06:17:04	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:06:47.865   06:17:04	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:06:47.865   06:17:04	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:06:47.865   06:17:04	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:06:47.865   06:17:04	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:06:47.865  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:47.865  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms
00:06:47.865  
00:06:47.865  --- 10.0.0.2 ping statistics ---
00:06:47.865  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:47.865  rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:06:47.865   06:17:04	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:06:47.865  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:06:47.865  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms
00:06:47.865  
00:06:47.865  --- 10.0.0.3 ping statistics ---
00:06:47.865  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:47.865  rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:06:47.865   06:17:04	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:06:47.865  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:47.865  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:06:47.865  
00:06:47.865  --- 10.0.0.1 ping statistics ---
00:06:47.865  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:47.865  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
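Taken together, the trace above amounts to the following bring-up of the test network: one veth pair per interface, the target-side ends moved into a fresh nvmf_tgt_ns_spdk namespace, all host-side ends enslaved to a bridge, and an iptables rule admitting NVMe/TCP on port 4420. A standalone sketch using the exact names and addresses from this run (the failed deletes at the start are just best-effort cleanup of an earlier run):

  # namespace plus three veth pairs; the *_if ends go to the target namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # links up, host-side ends bridged, NVMe/TCP port allowed in
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity checks, as above
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1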
00:06:47.865   06:17:04	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:47.865   06:17:04	-- nvmf/common.sh@421 -- # return 0
00:06:47.865   06:17:04	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:06:47.865   06:17:04	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:47.865   06:17:04	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:06:47.865   06:17:04	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:06:47.865   06:17:04	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:47.865   06:17:04	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:06:47.865   06:17:04	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:06:47.865   06:17:04	-- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:06:47.865   06:17:04	-- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:06:47.865   06:17:04	-- common/autotest_common.sh@722 -- # xtrace_disable
00:06:47.865   06:17:04	-- common/autotest_common.sh@10 -- # set +x
00:06:47.865   06:17:04	-- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:06:47.865   06:17:04	-- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:06:47.865   06:17:04	-- target/nvmf_example.sh@34 -- # nvmfpid=60034
00:06:47.865   06:17:04	-- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:06:47.865   06:17:04	-- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:47.865   06:17:04	-- target/nvmf_example.sh@36 -- # waitforlisten 60034
00:06:47.865   06:17:04	-- common/autotest_common.sh@829 -- # '[' -z 60034 ']'
00:06:47.865   06:17:04	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.865   06:17:04	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:47.865  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.865   06:17:04	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:47.865   06:17:04	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:47.865   06:17:04	-- common/autotest_common.sh@10 -- # set +x
00:06:49.250   06:17:05	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:49.250   06:17:05	-- common/autotest_common.sh@862 -- # return 0
00:06:49.250   06:17:05	-- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:06:49.250   06:17:05	-- common/autotest_common.sh@728 -- # xtrace_disable
00:06:49.250   06:17:05	-- common/autotest_common.sh@10 -- # set +x
00:06:49.250   06:17:05	-- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:49.250   06:17:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.250   06:17:05	-- common/autotest_common.sh@10 -- # set +x
00:06:49.250   06:17:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.250    06:17:05	-- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:06:49.250    06:17:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.250    06:17:05	-- common/autotest_common.sh@10 -- # set +x
00:06:49.250    06:17:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.250   06:17:05	-- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:06:49.250   06:17:05	-- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:49.250   06:17:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.250   06:17:05	-- common/autotest_common.sh@10 -- # set +x
00:06:49.250   06:17:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.250   06:17:05	-- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:06:49.250   06:17:05	-- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:06:49.250   06:17:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.250   06:17:05	-- common/autotest_common.sh@10 -- # set +x
00:06:49.250   06:17:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.250   06:17:05	-- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:49.250   06:17:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.250   06:17:05	-- common/autotest_common.sh@10 -- # set +x
00:06:49.250   06:17:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
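The rpc_cmd calls above provision the target end to end: a TCP transport, a 64 MB malloc bdev, a subsystem (allowing any host, serial SPDK00000000000001) with that bdev as its namespace, and a TCP listener on 10.0.0.2:4420. Run outside the test harness this is roughly the following sequence — a sketch assuming scripts/rpc.py and the example app's default RPC socket at /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192     # same transport flags as in the trace above
  $rpc bdev_malloc_create 64 512                   # 64 MB bdev, 512-byte blocks -> "Malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420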
00:06:49.250   06:17:05	-- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:06:49.250   06:17:05	-- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:06:59.225  Initializing NVMe Controllers
00:06:59.225  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:59.225  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:59.225  Initialization complete. Launching workers.
00:06:59.225  ========================================================
00:06:59.225                                                                                                               Latency(us)
00:06:59.225  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:59.225  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   16582.40      64.77    3861.00     594.54   20203.36
00:06:59.225  ========================================================
00:06:59.225  Total                                                                    :   16582.40      64.77    3861.00     594.54   20203.36
00:06:59.225  
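As a sanity check on the summary line: 16582.40 IOPS of 4096-byte I/O is 16582.40 × 4096 / 2^20 ≈ 64.77 MiB/s, matching the MiB/s column, and with the queue depth of 64 used here Little's law gives 64 / 3861 µs ≈ 16.6k IOPS, consistent with the measured rate for this randrw workload.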
00:06:59.225   06:17:16	-- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:59.225   06:17:16	-- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:59.225   06:17:16	-- nvmf/common.sh@476 -- # nvmfcleanup
00:06:59.225   06:17:16	-- nvmf/common.sh@116 -- # sync
00:06:59.225   06:17:16	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:06:59.225   06:17:16	-- nvmf/common.sh@119 -- # set +e
00:06:59.225   06:17:16	-- nvmf/common.sh@120 -- # for i in {1..20}
00:06:59.225   06:17:16	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:06:59.225  rmmod nvme_tcp
00:06:59.225  rmmod nvme_fabrics
00:06:59.225  rmmod nvme_keyring
00:06:59.484   06:17:16	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:06:59.484   06:17:16	-- nvmf/common.sh@123 -- # set -e
00:06:59.484   06:17:16	-- nvmf/common.sh@124 -- # return 0
00:06:59.484   06:17:16	-- nvmf/common.sh@477 -- # '[' -n 60034 ']'
00:06:59.484   06:17:16	-- nvmf/common.sh@478 -- # killprocess 60034
00:06:59.484   06:17:16	-- common/autotest_common.sh@936 -- # '[' -z 60034 ']'
00:06:59.484   06:17:16	-- common/autotest_common.sh@940 -- # kill -0 60034
00:06:59.484    06:17:16	-- common/autotest_common.sh@941 -- # uname
00:06:59.484   06:17:16	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:59.484    06:17:16	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60034
00:06:59.484   06:17:16	-- common/autotest_common.sh@942 -- # process_name=nvmf
00:06:59.484   06:17:16	-- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']'
00:06:59.484  killing process with pid 60034
00:06:59.484   06:17:16	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 60034'
00:06:59.484   06:17:16	-- common/autotest_common.sh@955 -- # kill 60034
00:06:59.484   06:17:16	-- common/autotest_common.sh@960 -- # wait 60034
00:06:59.484  nvmf threads initialize successfully
00:06:59.484  bdev subsystem init successfully
00:06:59.484  created an nvmf target service
00:06:59.484  create targets' poll groups done
00:06:59.484  all subsystems of target started
00:06:59.484  nvmf target is running
00:06:59.484  all subsystems of target stopped
00:06:59.484  destroy targets' poll groups done
00:06:59.484  destroyed the nvmf target service
00:06:59.484  bdev subsystem finish successfully
00:06:59.484  nvmf threads destroy successfully
00:06:59.484   06:17:16	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:06:59.484   06:17:16	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:06:59.484   06:17:16	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:06:59.484   06:17:16	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:59.484   06:17:16	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:06:59.484   06:17:16	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:59.484   06:17:16	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:59.484    06:17:16	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:59.743   06:17:16	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
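Teardown mirrors the setup: the kernel initiator modules are unloaded (with set +e and a retry loop in case they are still in use), the example target is stopped via its PID, and the namespace and initiator address are cleaned up. In short — a sketch; _remove_spdk_ns runs with its output suppressed, so its exact commands are not visible in this log:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 60034 && wait 60034      # PID of the nvmf example app from this run
  # _remove_spdk_ns tears down the nvmf_tgt_ns_spdk namespace here (hidden behind the /dev/null redirect)
  ip -4 addr flush nvmf_init_if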
00:06:59.743   06:17:16	-- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:06:59.743   06:17:16	-- common/autotest_common.sh@728 -- # xtrace_disable
00:06:59.743   06:17:16	-- common/autotest_common.sh@10 -- # set +x
00:06:59.743  
00:06:59.743  real	0m12.434s
00:06:59.743  user	0m44.662s
00:06:59.743  sys	0m1.968s
00:06:59.743   06:17:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:59.743   06:17:16	-- common/autotest_common.sh@10 -- # set +x
00:06:59.743  ************************************
00:06:59.743  END TEST nvmf_example
00:06:59.743  ************************************
00:06:59.743   06:17:16	-- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:06:59.743   06:17:16	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:06:59.743   06:17:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:59.743   06:17:16	-- common/autotest_common.sh@10 -- # set +x
00:06:59.743  ************************************
00:06:59.743  START TEST nvmf_filesystem
00:06:59.743  ************************************
00:06:59.743   06:17:16	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:06:59.743  * Looking for test storage...
00:06:59.743  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:06:59.743     06:17:16	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:59.743      06:17:16	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:59.743      06:17:16	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:59.743     06:17:16	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:59.743     06:17:16	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:59.743     06:17:16	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:59.743     06:17:16	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:59.743     06:17:16	-- scripts/common.sh@335 -- # IFS=.-:
00:06:59.743     06:17:16	-- scripts/common.sh@335 -- # read -ra ver1
00:06:59.743     06:17:16	-- scripts/common.sh@336 -- # IFS=.-:
00:06:59.744     06:17:16	-- scripts/common.sh@336 -- # read -ra ver2
00:06:59.744     06:17:16	-- scripts/common.sh@337 -- # local 'op=<'
00:06:59.744     06:17:16	-- scripts/common.sh@339 -- # ver1_l=2
00:06:59.744     06:17:16	-- scripts/common.sh@340 -- # ver2_l=1
00:06:59.744     06:17:16	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:59.744     06:17:16	-- scripts/common.sh@343 -- # case "$op" in
00:06:59.744     06:17:16	-- scripts/common.sh@344 -- # : 1
00:06:59.744     06:17:16	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:59.744     06:17:16	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:59.744      06:17:16	-- scripts/common.sh@364 -- # decimal 1
00:06:59.744      06:17:16	-- scripts/common.sh@352 -- # local d=1
00:06:59.744      06:17:16	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:59.744      06:17:16	-- scripts/common.sh@354 -- # echo 1
00:07:00.004     06:17:16	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:00.004      06:17:16	-- scripts/common.sh@365 -- # decimal 2
00:07:00.004      06:17:16	-- scripts/common.sh@352 -- # local d=2
00:07:00.004      06:17:16	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:00.004      06:17:16	-- scripts/common.sh@354 -- # echo 2
00:07:00.004     06:17:16	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:00.004     06:17:16	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:00.004     06:17:16	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:00.004     06:17:16	-- scripts/common.sh@367 -- # return 0
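Everything from the lcov --version probe down to this return is scripts/common.sh's cmp_versions at work: `lt 1.15 2` splits both strings on '.', '-' and ':' and compares them field by field, 1 < 2 decides it at the first field, so the test succeeds and the --rc lcov_branch_coverage/lcov_function_coverage spellings are the ones exported into LCOV_OPTS below. Reduced to its essence:

  IFS=.-: read -ra ver1 <<< "1.15"             # -> (1 15)
  IFS=.-: read -ra ver2 <<< "2"                # -> (2)
  (( ver1[0] < ver2[0] )) && echo "1.15 < 2"   # true, so lt returns 0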
00:07:00.004     06:17:16	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:00.004     06:17:16	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:00.004  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.004  		--rc genhtml_branch_coverage=1
00:07:00.004  		--rc genhtml_function_coverage=1
00:07:00.004  		--rc genhtml_legend=1
00:07:00.004  		--rc geninfo_all_blocks=1
00:07:00.004  		--rc geninfo_unexecuted_blocks=1
00:07:00.004  		
00:07:00.004  		'
00:07:00.004     06:17:16	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:00.004  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.004  		--rc genhtml_branch_coverage=1
00:07:00.004  		--rc genhtml_function_coverage=1
00:07:00.004  		--rc genhtml_legend=1
00:07:00.004  		--rc geninfo_all_blocks=1
00:07:00.004  		--rc geninfo_unexecuted_blocks=1
00:07:00.004  		
00:07:00.004  		'
00:07:00.004     06:17:16	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:00.004  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.004  		--rc genhtml_branch_coverage=1
00:07:00.004  		--rc genhtml_function_coverage=1
00:07:00.004  		--rc genhtml_legend=1
00:07:00.004  		--rc geninfo_all_blocks=1
00:07:00.004  		--rc geninfo_unexecuted_blocks=1
00:07:00.004  		
00:07:00.004  		'
00:07:00.004     06:17:16	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:00.004  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.004  		--rc genhtml_branch_coverage=1
00:07:00.004  		--rc genhtml_function_coverage=1
00:07:00.004  		--rc genhtml_legend=1
00:07:00.004  		--rc geninfo_all_blocks=1
00:07:00.004  		--rc geninfo_unexecuted_blocks=1
00:07:00.004  		
00:07:00.004  		'
00:07:00.004   06:17:16	-- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:07:00.004    06:17:16	-- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:07:00.004    06:17:16	-- common/autotest_common.sh@34 -- # set -e
00:07:00.004    06:17:16	-- common/autotest_common.sh@35 -- # shopt -s nullglob
00:07:00.004    06:17:16	-- common/autotest_common.sh@36 -- # shopt -s extglob
00:07:00.004    06:17:16	-- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:07:00.004    06:17:16	-- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:07:00.004     06:17:16	-- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:07:00.004     06:17:16	-- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:07:00.004     06:17:16	-- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:07:00.004     06:17:16	-- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:07:00.004     06:17:16	-- common/build_config.sh@5 -- # CONFIG_USDT=y
00:07:00.004     06:17:16	-- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:07:00.004     06:17:16	-- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:07:00.004     06:17:16	-- common/build_config.sh@8 -- # CONFIG_RBD=n
00:07:00.004     06:17:16	-- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:07:00.004     06:17:16	-- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:07:00.004     06:17:16	-- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:07:00.004     06:17:16	-- common/build_config.sh@12 -- # CONFIG_SMA=n
00:07:00.004     06:17:16	-- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:07:00.004     06:17:16	-- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:07:00.004     06:17:16	-- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:07:00.004     06:17:16	-- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:07:00.004     06:17:16	-- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:07:00.004     06:17:16	-- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:07:00.004     06:17:16	-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:07:00.004     06:17:16	-- common/build_config.sh@20 -- # CONFIG_LTO=n
00:07:00.004     06:17:16	-- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:07:00.004     06:17:16	-- common/build_config.sh@22 -- # CONFIG_CET=n
00:07:00.004     06:17:16	-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:07:00.004     06:17:16	-- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:07:00.004     06:17:16	-- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:07:00.004     06:17:16	-- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y
00:07:00.004     06:17:16	-- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:07:00.004     06:17:16	-- common/build_config.sh@28 -- # CONFIG_UBLK=y
00:07:00.004     06:17:16	-- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:07:00.005     06:17:16	-- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:07:00.005     06:17:16	-- common/build_config.sh@31 -- # CONFIG_OCF=n
00:07:00.005     06:17:16	-- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:07:00.005     06:17:16	-- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:07:00.005     06:17:16	-- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=
00:07:00.005     06:17:16	-- common/build_config.sh@35 -- # CONFIG_FUZZER=n
00:07:00.005     06:17:16	-- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:07:00.005     06:17:16	-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:07:00.005     06:17:16	-- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:07:00.005     06:17:16	-- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:07:00.005     06:17:16	-- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:07:00.005     06:17:16	-- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=
00:07:00.005     06:17:16	-- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:07:00.005     06:17:16	-- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n
00:07:00.005     06:17:16	-- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:07:00.005     06:17:16	-- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:07:00.005     06:17:16	-- common/build_config.sh@46 -- # CONFIG_COVERAGE=y
00:07:00.005     06:17:16	-- common/build_config.sh@47 -- # CONFIG_RDMA=y
00:07:00.005     06:17:16	-- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:07:00.005     06:17:16	-- common/build_config.sh@49 -- # CONFIG_URING_PATH=
00:07:00.005     06:17:16	-- common/build_config.sh@50 -- # CONFIG_XNVME=n
00:07:00.005     06:17:16	-- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y
00:07:00.005     06:17:16	-- common/build_config.sh@52 -- # CONFIG_ARCH=native
00:07:00.005     06:17:16	-- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n
00:07:00.005     06:17:16	-- common/build_config.sh@54 -- # CONFIG_WERROR=y
00:07:00.005     06:17:16	-- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n
00:07:00.005     06:17:16	-- common/build_config.sh@56 -- # CONFIG_UBSAN=y
00:07:00.005     06:17:16	-- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR=
00:07:00.005     06:17:16	-- common/build_config.sh@58 -- # CONFIG_GOLANG=y
00:07:00.005     06:17:16	-- common/build_config.sh@59 -- # CONFIG_ISAL=y
00:07:00.005     06:17:16	-- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y
00:07:00.005     06:17:16	-- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=
00:07:00.005     06:17:16	-- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs
00:07:00.005     06:17:16	-- common/build_config.sh@63 -- # CONFIG_APPS=y
00:07:00.005     06:17:16	-- common/build_config.sh@64 -- # CONFIG_SHARED=y
00:07:00.005     06:17:16	-- common/build_config.sh@65 -- # CONFIG_FC_PATH=
00:07:00.005     06:17:16	-- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n
00:07:00.005     06:17:16	-- common/build_config.sh@67 -- # CONFIG_FC=n
00:07:00.005     06:17:16	-- common/build_config.sh@68 -- # CONFIG_AVAHI=y
00:07:00.005     06:17:16	-- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y
00:07:00.005     06:17:16	-- common/build_config.sh@70 -- # CONFIG_RAID5F=n
00:07:00.005     06:17:16	-- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y
00:07:00.005     06:17:16	-- common/build_config.sh@72 -- # CONFIG_TESTS=y
00:07:00.005     06:17:16	-- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n
00:07:00.005     06:17:16	-- common/build_config.sh@74 -- # CONFIG_MAX_LCORES=
00:07:00.005     06:17:16	-- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n
00:07:00.005     06:17:16	-- common/build_config.sh@76 -- # CONFIG_DEBUG=y
00:07:00.005     06:17:16	-- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n
00:07:00.005     06:17:16	-- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX=
00:07:00.005     06:17:16	-- common/build_config.sh@79 -- # CONFIG_URING=n
00:07:00.005    06:17:16	-- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:07:00.005       06:17:16	-- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:07:00.005      06:17:16	-- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:07:00.005     06:17:16	-- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:07:00.005     06:17:16	-- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:07:00.005     06:17:16	-- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:07:00.005     06:17:16	-- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:07:00.005     06:17:16	-- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:07:00.005     06:17:16	-- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:07:00.005     06:17:16	-- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:07:00.005     06:17:16	-- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:07:00.005     06:17:16	-- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:07:00.005     06:17:16	-- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:07:00.005     06:17:16	-- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:07:00.005     06:17:16	-- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:07:00.005     06:17:16	-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:07:00.005  #define SPDK_CONFIG_H
00:07:00.005  #define SPDK_CONFIG_APPS 1
00:07:00.005  #define SPDK_CONFIG_ARCH native
00:07:00.005  #undef SPDK_CONFIG_ASAN
00:07:00.005  #define SPDK_CONFIG_AVAHI 1
00:07:00.005  #undef SPDK_CONFIG_CET
00:07:00.005  #define SPDK_CONFIG_COVERAGE 1
00:07:00.005  #define SPDK_CONFIG_CROSS_PREFIX 
00:07:00.005  #undef SPDK_CONFIG_CRYPTO
00:07:00.005  #undef SPDK_CONFIG_CRYPTO_MLX5
00:07:00.005  #undef SPDK_CONFIG_CUSTOMOCF
00:07:00.005  #undef SPDK_CONFIG_DAOS
00:07:00.005  #define SPDK_CONFIG_DAOS_DIR 
00:07:00.005  #define SPDK_CONFIG_DEBUG 1
00:07:00.005  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:07:00.005  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:00.005  #define SPDK_CONFIG_DPDK_INC_DIR 
00:07:00.005  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:07:00.005  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:07:00.005  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:07:00.005  #define SPDK_CONFIG_EXAMPLES 1
00:07:00.005  #undef SPDK_CONFIG_FC
00:07:00.005  #define SPDK_CONFIG_FC_PATH 
00:07:00.005  #define SPDK_CONFIG_FIO_PLUGIN 1
00:07:00.005  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:07:00.005  #undef SPDK_CONFIG_FUSE
00:07:00.005  #undef SPDK_CONFIG_FUZZER
00:07:00.005  #define SPDK_CONFIG_FUZZER_LIB 
00:07:00.005  #define SPDK_CONFIG_GOLANG 1
00:07:00.005  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:07:00.005  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:07:00.005  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:07:00.005  #undef SPDK_CONFIG_HAVE_LIBBSD
00:07:00.005  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:07:00.005  #define SPDK_CONFIG_IDXD 1
00:07:00.005  #define SPDK_CONFIG_IDXD_KERNEL 1
00:07:00.005  #undef SPDK_CONFIG_IPSEC_MB
00:07:00.005  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:07:00.005  #define SPDK_CONFIG_ISAL 1
00:07:00.005  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:07:00.005  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:07:00.005  #define SPDK_CONFIG_LIBDIR 
00:07:00.005  #undef SPDK_CONFIG_LTO
00:07:00.005  #define SPDK_CONFIG_MAX_LCORES 
00:07:00.005  #define SPDK_CONFIG_NVME_CUSE 1
00:07:00.005  #undef SPDK_CONFIG_OCF
00:07:00.005  #define SPDK_CONFIG_OCF_PATH 
00:07:00.005  #define SPDK_CONFIG_OPENSSL_PATH 
00:07:00.005  #undef SPDK_CONFIG_PGO_CAPTURE
00:07:00.005  #undef SPDK_CONFIG_PGO_USE
00:07:00.005  #define SPDK_CONFIG_PREFIX /usr/local
00:07:00.005  #undef SPDK_CONFIG_RAID5F
00:07:00.005  #undef SPDK_CONFIG_RBD
00:07:00.005  #define SPDK_CONFIG_RDMA 1
00:07:00.005  #define SPDK_CONFIG_RDMA_PROV verbs
00:07:00.005  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:07:00.005  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:07:00.005  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:07:00.005  #define SPDK_CONFIG_SHARED 1
00:07:00.005  #undef SPDK_CONFIG_SMA
00:07:00.005  #define SPDK_CONFIG_TESTS 1
00:07:00.005  #undef SPDK_CONFIG_TSAN
00:07:00.005  #define SPDK_CONFIG_UBLK 1
00:07:00.005  #define SPDK_CONFIG_UBSAN 1
00:07:00.005  #undef SPDK_CONFIG_UNIT_TESTS
00:07:00.005  #undef SPDK_CONFIG_URING
00:07:00.005  #define SPDK_CONFIG_URING_PATH 
00:07:00.005  #undef SPDK_CONFIG_URING_ZNS
00:07:00.005  #define SPDK_CONFIG_USDT 1
00:07:00.005  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:07:00.005  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:07:00.005  #define SPDK_CONFIG_VFIO_USER 1
00:07:00.005  #define SPDK_CONFIG_VFIO_USER_DIR 
00:07:00.005  #define SPDK_CONFIG_VHOST 1
00:07:00.005  #define SPDK_CONFIG_VIRTIO 1
00:07:00.005  #undef SPDK_CONFIG_VTUNE
00:07:00.005  #define SPDK_CONFIG_VTUNE_DIR 
00:07:00.005  #define SPDK_CONFIG_WERROR 1
00:07:00.005  #define SPDK_CONFIG_WPDK_DIR 
00:07:00.005  #undef SPDK_CONFIG_XNVME
00:07:00.005  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:07:00.005     06:17:16	-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
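The escaped pattern being matched above is just a substring test: the generated include/spdk/config.h (dumped in full by the trace) is checked for "#define SPDK_CONFIG_DEBUG", which it contains, so this is a debug build; the (( SPDK_AUTOTEST_DEBUG_APPS )) check just above then comes out false, so no debug-app variants are enabled for this run. Standalone, the check is simply:

  [[ $(< /home/vagrant/spdk_repo/spdk/include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]] && echo "debug build"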
00:07:00.005    06:17:16	-- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:00.005     06:17:16	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:00.005     06:17:16	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:00.005     06:17:16	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:00.005      06:17:16	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.005      06:17:16	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.005      06:17:16	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.005      06:17:16	-- paths/export.sh@5 -- # export PATH
00:07:00.005      06:17:16	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.005    06:17:16	-- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:07:00.006       06:17:16	-- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:07:00.006      06:17:16	-- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:07:00.006     06:17:16	-- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:07:00.006      06:17:16	-- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:07:00.006     06:17:16	-- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:07:00.006     06:17:16	-- pm/common@16 -- # TEST_TAG=N/A
00:07:00.006     06:17:16	-- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:07:00.006    06:17:16	-- common/autotest_common.sh@52 -- # : 1
00:07:00.006    06:17:16	-- common/autotest_common.sh@53 -- # export RUN_NIGHTLY
00:07:00.006    06:17:16	-- common/autotest_common.sh@56 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:07:00.006    06:17:16	-- common/autotest_common.sh@58 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND
00:07:00.006    06:17:16	-- common/autotest_common.sh@60 -- # : 1
00:07:00.006    06:17:16	-- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:07:00.006    06:17:16	-- common/autotest_common.sh@62 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST
00:07:00.006    06:17:16	-- common/autotest_common.sh@64 -- # :
00:07:00.006    06:17:16	-- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD
00:07:00.006    06:17:16	-- common/autotest_common.sh@66 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD
00:07:00.006    06:17:16	-- common/autotest_common.sh@68 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL
00:07:00.006    06:17:16	-- common/autotest_common.sh@70 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI
00:07:00.006    06:17:16	-- common/autotest_common.sh@72 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR
00:07:00.006    06:17:16	-- common/autotest_common.sh@74 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME
00:07:00.006    06:17:16	-- common/autotest_common.sh@76 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR
00:07:00.006    06:17:16	-- common/autotest_common.sh@78 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP
00:07:00.006    06:17:16	-- common/autotest_common.sh@80 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI
00:07:00.006    06:17:16	-- common/autotest_common.sh@82 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE
00:07:00.006    06:17:16	-- common/autotest_common.sh@84 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP
00:07:00.006    06:17:16	-- common/autotest_common.sh@86 -- # : 1
00:07:00.006    06:17:16	-- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF
00:07:00.006    06:17:16	-- common/autotest_common.sh@88 -- # : 1
00:07:00.006    06:17:16	-- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER
00:07:00.006    06:17:16	-- common/autotest_common.sh@90 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU
00:07:00.006    06:17:16	-- common/autotest_common.sh@92 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER
00:07:00.006    06:17:16	-- common/autotest_common.sh@94 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT
00:07:00.006    06:17:16	-- common/autotest_common.sh@96 -- # : tcp
00:07:00.006    06:17:16	-- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT
00:07:00.006    06:17:16	-- common/autotest_common.sh@98 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD
00:07:00.006    06:17:16	-- common/autotest_common.sh@100 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST
00:07:00.006    06:17:16	-- common/autotest_common.sh@102 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV
00:07:00.006    06:17:16	-- common/autotest_common.sh@104 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT
00:07:00.006    06:17:16	-- common/autotest_common.sh@106 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS
00:07:00.006    06:17:16	-- common/autotest_common.sh@108 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT
00:07:00.006    06:17:16	-- common/autotest_common.sh@110 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL
00:07:00.006    06:17:16	-- common/autotest_common.sh@112 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS
00:07:00.006    06:17:16	-- common/autotest_common.sh@114 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN
00:07:00.006    06:17:16	-- common/autotest_common.sh@116 -- # : 1
00:07:00.006    06:17:16	-- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN
00:07:00.006    06:17:16	-- common/autotest_common.sh@118 -- # :
00:07:00.006    06:17:16	-- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK
00:07:00.006    06:17:16	-- common/autotest_common.sh@120 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT
00:07:00.006    06:17:16	-- common/autotest_common.sh@122 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO
00:07:00.006    06:17:16	-- common/autotest_common.sh@124 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL
00:07:00.006    06:17:16	-- common/autotest_common.sh@126 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF
00:07:00.006    06:17:16	-- common/autotest_common.sh@128 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD
00:07:00.006    06:17:16	-- common/autotest_common.sh@130 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL
00:07:00.006    06:17:16	-- common/autotest_common.sh@132 -- # :
00:07:00.006    06:17:16	-- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK
00:07:00.006    06:17:16	-- common/autotest_common.sh@134 -- # : true
00:07:00.006    06:17:16	-- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X
00:07:00.006    06:17:16	-- common/autotest_common.sh@136 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5
00:07:00.006    06:17:16	-- common/autotest_common.sh@138 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@139 -- # export SPDK_TEST_URING
00:07:00.006    06:17:16	-- common/autotest_common.sh@140 -- # : 1
00:07:00.006    06:17:16	-- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT
00:07:00.006    06:17:16	-- common/autotest_common.sh@142 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO
00:07:00.006    06:17:16	-- common/autotest_common.sh@144 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER
00:07:00.006    06:17:16	-- common/autotest_common.sh@146 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD
00:07:00.006    06:17:16	-- common/autotest_common.sh@148 -- # :
00:07:00.006    06:17:16	-- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS
00:07:00.006    06:17:16	-- common/autotest_common.sh@150 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA
00:07:00.006    06:17:16	-- common/autotest_common.sh@152 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS
00:07:00.006    06:17:16	-- common/autotest_common.sh@154 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME
00:07:00.006    06:17:16	-- common/autotest_common.sh@156 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA
00:07:00.006    06:17:16	-- common/autotest_common.sh@158 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA
00:07:00.006    06:17:16	-- common/autotest_common.sh@160 -- # : 0
00:07:00.006    06:17:16	-- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT
00:07:00.006    06:17:16	-- common/autotest_common.sh@163 -- # :
00:07:00.006    06:17:16	-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET
00:07:00.006    06:17:16	-- common/autotest_common.sh@165 -- # : 1
00:07:00.006    06:17:16	-- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS
00:07:00.006    06:17:16	-- common/autotest_common.sh@167 -- # : 1
00:07:00.006    06:17:16	-- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT
00:07:00.006    06:17:16	-- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:07:00.006    06:17:16	-- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:07:00.006    06:17:16	-- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:07:00.006    06:17:16	-- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:07:00.006    06:17:16	-- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:07:00.006    06:17:16	-- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:07:00.006    06:17:16	-- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:07:00.006    06:17:16	-- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:07:00.006    06:17:16	-- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:07:00.006    06:17:16	-- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:07:00.006    06:17:16	-- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:07:00.006    06:17:16	-- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:07:00.006    06:17:16	-- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1
00:07:00.006    06:17:16	-- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1
00:07:00.006    06:17:16	-- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:07:00.006    06:17:16	-- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:07:00.007    06:17:16	-- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:07:00.007    06:17:16	-- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:07:00.007    06:17:16	-- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:07:00.007    06:17:16	-- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file
00:07:00.007    06:17:16	-- common/autotest_common.sh@196 -- # cat
00:07:00.007    06:17:16	-- common/autotest_common.sh@222 -- # echo leak:libfuse3.so
00:07:00.007    06:17:16	-- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:07:00.007    06:17:16	-- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:07:00.007    06:17:16	-- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:07:00.007    06:17:16	-- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:07:00.007    06:17:16	-- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']'
00:07:00.007    06:17:16	-- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR
00:07:00.007    06:17:16	-- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:07:00.007    06:17:16	-- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:07:00.007    06:17:16	-- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:07:00.007    06:17:16	-- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:07:00.007    06:17:16	-- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:00.007    06:17:16	-- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:00.007    06:17:16	-- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:00.007    06:17:16	-- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:00.007    06:17:16	-- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:07:00.007    06:17:16	-- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:07:00.007    06:17:16	-- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:00.007    06:17:16	-- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:00.007    06:17:16	-- common/autotest_common.sh@247 -- # _LCOV_MAIN=0
00:07:00.007    06:17:16	-- common/autotest_common.sh@248 -- # _LCOV_LLVM=1
00:07:00.007    06:17:16	-- common/autotest_common.sh@249 -- # _LCOV=
00:07:00.007    06:17:16	-- common/autotest_common.sh@250 -- # [[ '' == *clang* ]]
00:07:00.007    06:17:16	-- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]]
00:07:00.007    06:17:16	-- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:07:00.007    06:17:16	-- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]=
00:07:00.007    06:17:16	-- common/autotest_common.sh@255 -- # lcov_opt=
00:07:00.007    06:17:16	-- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']'
00:07:00.007    06:17:16	-- common/autotest_common.sh@259 -- # export valgrind=
00:07:00.007    06:17:16	-- common/autotest_common.sh@259 -- # valgrind=
00:07:00.007     06:17:16	-- common/autotest_common.sh@265 -- # uname -s
00:07:00.007    06:17:16	-- common/autotest_common.sh@265 -- # '[' Linux = Linux ']'
00:07:00.007    06:17:16	-- common/autotest_common.sh@266 -- # HUGEMEM=4096
00:07:00.007    06:17:16	-- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes
00:07:00.007    06:17:16	-- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes
00:07:00.007    06:17:16	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:07:00.007    06:17:16	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:07:00.007    06:17:16	-- common/autotest_common.sh@275 -- # MAKE=make
00:07:00.007    06:17:16	-- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10
00:07:00.007    06:17:16	-- common/autotest_common.sh@292 -- # export HUGEMEM=4096
00:07:00.007    06:17:16	-- common/autotest_common.sh@292 -- # HUGEMEM=4096
00:07:00.007    06:17:16	-- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:07:00.007    06:17:16	-- common/autotest_common.sh@299 -- # NO_HUGE=()
00:07:00.007    06:17:16	-- common/autotest_common.sh@300 -- # TEST_MODE=
00:07:00.007    06:17:16	-- common/autotest_common.sh@301 -- # for i in "$@"
00:07:00.007    06:17:16	-- common/autotest_common.sh@302 -- # case "$i" in
00:07:00.007    06:17:16	-- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp
00:07:00.007    06:17:16	-- common/autotest_common.sh@319 -- # [[ -z 60271 ]]
00:07:00.007    06:17:16	-- common/autotest_common.sh@319 -- # kill -0 60271
00:07:00.007    06:17:16	-- common/autotest_common.sh@1675 -- # set_test_storage 2147483648
00:07:00.007    06:17:16	-- common/autotest_common.sh@329 -- # [[ -v testdir ]]
00:07:00.007    06:17:16	-- common/autotest_common.sh@331 -- # local requested_size=2147483648
00:07:00.007    06:17:16	-- common/autotest_common.sh@332 -- # local mount target_dir
00:07:00.007    06:17:16	-- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses
00:07:00.007    06:17:16	-- common/autotest_common.sh@335 -- # local source fs size avail mount use
00:07:00.007    06:17:16	-- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates
00:07:00.007     06:17:16	-- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX
00:07:00.007    06:17:16	-- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.a5kcKb
00:07:00.007    06:17:16	-- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:07:00.007    06:17:16	-- common/autotest_common.sh@346 -- # [[ -n '' ]]
00:07:00.007    06:17:16	-- common/autotest_common.sh@351 -- # [[ -n '' ]]
00:07:00.007    06:17:16	-- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.a5kcKb/tests/target /tmp/spdk.a5kcKb
00:07:00.007    06:17:16	-- common/autotest_common.sh@359 -- # requested_size=2214592512
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007     06:17:16	-- common/autotest_common.sh@328 -- # df -T
00:07:00.007     06:17:16	-- common/autotest_common.sh@328 -- # grep -v Filesystem
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=14016274432
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=5551218688
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=4194304
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=0
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=1257472
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=12816384
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=14016274432
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=5551218688
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=6266290176
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=135168
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=ext4
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=840085504
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=103477248
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=vfat
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=91617280
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=12990464
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=12288
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output
00:07:00.007    06:17:16	-- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # avails["$mount"]=98019598336
00:07:00.007    06:17:16	-- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992
00:07:00.007    06:17:16	-- common/autotest_common.sh@364 -- # uses["$mount"]=1683181568
00:07:00.007    06:17:16	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:07:00.007    06:17:16	-- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n'
00:07:00.007  * Looking for test storage...
00:07:00.007    06:17:16	-- common/autotest_common.sh@369 -- # local target_space new_size
00:07:00.007    06:17:16	-- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}"
00:07:00.007     06:17:16	-- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}'
00:07:00.008     06:17:16	-- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:00.008    06:17:16	-- common/autotest_common.sh@373 -- # mount=/home
00:07:00.008    06:17:16	-- common/autotest_common.sh@375 -- # target_space=14016274432
00:07:00.008    06:17:16	-- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size ))
00:07:00.008    06:17:16	-- common/autotest_common.sh@379 -- # (( target_space >= requested_size ))
00:07:00.008    06:17:16	-- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]]
00:07:00.008    06:17:16	-- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]]
00:07:00.008    06:17:16	-- common/autotest_common.sh@381 -- # [[ /home == / ]]
00:07:00.008    06:17:16	-- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:00.008    06:17:16	-- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:00.008    06:17:16	-- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:00.008  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:00.008    06:17:16	-- common/autotest_common.sh@390 -- # return 0
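(The trace above is the test-storage lookup: for each candidate directory the script resolves the mount point with df piped through awk, compares the available space against the requested size, and exports the first directory with enough room as SPDK_TEST_STORAGE — here /home. A minimal stand-alone sketch of that check; the candidate list and the 10 GiB threshold below are illustrative assumptions, not the script's exact values.)

    # Sketch only: candidates and requested_size are assumed for illustration.
    requested_size=$((10 * 1024 * 1024 * 1024))   # bytes of free space we want
    for target_dir in /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp; do
        # GNU df: column 4 is "Available", column 6 is "Mounted on"; the awk filter drops the header
        mount_point=$(df -B1 "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=$(df -B1 "$target_dir" | awk '$1 !~ /Filesystem/{print $4}')
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s (mount %s)\n' "$target_dir" "$mount_point"
            break
        fi
    done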
00:07:00.008    06:17:16	-- common/autotest_common.sh@1677 -- # set -o errtrace
00:07:00.008    06:17:16	-- common/autotest_common.sh@1678 -- # shopt -s extdebug
00:07:00.008    06:17:16	-- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:07:00.008    06:17:16	-- common/autotest_common.sh@1681 -- # PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:07:00.008    06:17:16	-- common/autotest_common.sh@1682 -- # true
00:07:00.008    06:17:16	-- common/autotest_common.sh@1684 -- # xtrace_fd
00:07:00.008    06:17:16	-- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:07:00.008    06:17:16	-- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:07:00.008    06:17:16	-- common/autotest_common.sh@27 -- # exec
00:07:00.008    06:17:16	-- common/autotest_common.sh@29 -- # exec
00:07:00.008    06:17:16	-- common/autotest_common.sh@31 -- # xtrace_restore
00:07:00.008    06:17:16	-- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:07:00.008    06:17:16	-- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:07:00.008    06:17:16	-- common/autotest_common.sh@18 -- # set -x
00:07:00.008    06:17:16	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:00.008     06:17:16	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:00.008     06:17:16	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:00.008    06:17:16	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:00.008    06:17:16	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:00.008    06:17:16	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:00.008    06:17:16	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:00.008    06:17:16	-- scripts/common.sh@335 -- # IFS=.-:
00:07:00.008    06:17:16	-- scripts/common.sh@335 -- # read -ra ver1
00:07:00.008    06:17:16	-- scripts/common.sh@336 -- # IFS=.-:
00:07:00.008    06:17:16	-- scripts/common.sh@336 -- # read -ra ver2
00:07:00.008    06:17:16	-- scripts/common.sh@337 -- # local 'op=<'
00:07:00.008    06:17:16	-- scripts/common.sh@339 -- # ver1_l=2
00:07:00.008    06:17:16	-- scripts/common.sh@340 -- # ver2_l=1
00:07:00.008    06:17:16	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:00.008    06:17:16	-- scripts/common.sh@343 -- # case "$op" in
00:07:00.008    06:17:16	-- scripts/common.sh@344 -- # : 1
00:07:00.008    06:17:16	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:00.008    06:17:16	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:00.008     06:17:16	-- scripts/common.sh@364 -- # decimal 1
00:07:00.008     06:17:16	-- scripts/common.sh@352 -- # local d=1
00:07:00.008     06:17:16	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:00.008     06:17:16	-- scripts/common.sh@354 -- # echo 1
00:07:00.008    06:17:16	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:00.008     06:17:16	-- scripts/common.sh@365 -- # decimal 2
00:07:00.008     06:17:16	-- scripts/common.sh@352 -- # local d=2
00:07:00.008     06:17:16	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:00.008     06:17:16	-- scripts/common.sh@354 -- # echo 2
00:07:00.008    06:17:16	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:00.008    06:17:16	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:00.008    06:17:16	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:00.008    06:17:16	-- scripts/common.sh@367 -- # return 0
00:07:00.008    06:17:16	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:00.008    06:17:16	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:00.008  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.008  		--rc genhtml_branch_coverage=1
00:07:00.008  		--rc genhtml_function_coverage=1
00:07:00.008  		--rc genhtml_legend=1
00:07:00.008  		--rc geninfo_all_blocks=1
00:07:00.008  		--rc geninfo_unexecuted_blocks=1
00:07:00.008  		
00:07:00.008  		'
00:07:00.008    06:17:16	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:00.008  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.008  		--rc genhtml_branch_coverage=1
00:07:00.008  		--rc genhtml_function_coverage=1
00:07:00.008  		--rc genhtml_legend=1
00:07:00.008  		--rc geninfo_all_blocks=1
00:07:00.008  		--rc geninfo_unexecuted_blocks=1
00:07:00.008  		
00:07:00.008  		'
00:07:00.008    06:17:16	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:00.008  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.008  		--rc genhtml_branch_coverage=1
00:07:00.008  		--rc genhtml_function_coverage=1
00:07:00.008  		--rc genhtml_legend=1
00:07:00.008  		--rc geninfo_all_blocks=1
00:07:00.008  		--rc geninfo_unexecuted_blocks=1
00:07:00.008  		
00:07:00.008  		'
00:07:00.008    06:17:16	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:00.008  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.008  		--rc genhtml_branch_coverage=1
00:07:00.008  		--rc genhtml_function_coverage=1
00:07:00.008  		--rc genhtml_legend=1
00:07:00.008  		--rc geninfo_all_blocks=1
00:07:00.008  		--rc geninfo_unexecuted_blocks=1
00:07:00.008  		
00:07:00.008  		'
00:07:00.008   06:17:16	-- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:00.008     06:17:16	-- nvmf/common.sh@7 -- # uname -s
00:07:00.008    06:17:16	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:00.008    06:17:16	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:00.008    06:17:16	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:00.008    06:17:16	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:00.008    06:17:16	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:00.008    06:17:16	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:00.008    06:17:16	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:00.008    06:17:16	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:00.008    06:17:16	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:00.008     06:17:16	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:00.008    06:17:16	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:07:00.008    06:17:16	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:07:00.008    06:17:16	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:00.008    06:17:16	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:00.008    06:17:16	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:07:00.008    06:17:16	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:00.008     06:17:16	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:00.008     06:17:16	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:00.008     06:17:16	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:00.008      06:17:16	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.008      06:17:16	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.008      06:17:16	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.008      06:17:16	-- paths/export.sh@5 -- # export PATH
00:07:00.008      06:17:16	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:00.008    06:17:16	-- nvmf/common.sh@46 -- # : 0
00:07:00.008    06:17:16	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:00.008    06:17:16	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:00.008    06:17:16	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:00.008    06:17:16	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:00.008    06:17:16	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:00.008    06:17:16	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:00.008    06:17:16	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:00.008    06:17:16	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:00.008   06:17:16	-- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:07:00.008   06:17:16	-- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:07:00.008   06:17:16	-- target/filesystem.sh@15 -- # nvmftestinit
00:07:00.008   06:17:16	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:07:00.008   06:17:16	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:00.008   06:17:16	-- nvmf/common.sh@436 -- # prepare_net_devs
00:07:00.008   06:17:16	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:07:00.008   06:17:16	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:07:00.008   06:17:16	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:00.008   06:17:16	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:00.008    06:17:16	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:00.008   06:17:16	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:07:00.008   06:17:16	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:07:00.008   06:17:16	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:07:00.008   06:17:16	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:07:00.008   06:17:16	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:07:00.009   06:17:16	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:07:00.009   06:17:16	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:00.009   06:17:16	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:00.009   06:17:16	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:07:00.009   06:17:16	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:07:00.009   06:17:16	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:07:00.009   06:17:16	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:07:00.009   06:17:16	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:07:00.009   06:17:16	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:00.009   06:17:16	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:07:00.009   06:17:16	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:07:00.009   06:17:16	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:07:00.009   06:17:16	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:07:00.009   06:17:16	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:07:00.267   06:17:16	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:07:00.267  Cannot find device "nvmf_tgt_br"
00:07:00.267   06:17:16	-- nvmf/common.sh@154 -- # true
00:07:00.267   06:17:16	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:07:00.267  Cannot find device "nvmf_tgt_br2"
00:07:00.267   06:17:17	-- nvmf/common.sh@155 -- # true
00:07:00.267   06:17:17	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:07:00.267   06:17:17	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:07:00.267  Cannot find device "nvmf_tgt_br"
00:07:00.267   06:17:17	-- nvmf/common.sh@157 -- # true
00:07:00.267   06:17:17	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:07:00.267  Cannot find device "nvmf_tgt_br2"
00:07:00.267   06:17:17	-- nvmf/common.sh@158 -- # true
00:07:00.267   06:17:17	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:07:00.267   06:17:17	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:07:00.267   06:17:17	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:00.267  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:00.267   06:17:17	-- nvmf/common.sh@161 -- # true
00:07:00.267   06:17:17	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:07:00.267  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:00.267   06:17:17	-- nvmf/common.sh@162 -- # true
00:07:00.267   06:17:17	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:07:00.267   06:17:17	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:07:00.267   06:17:17	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:07:00.267   06:17:17	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:07:00.267   06:17:17	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:07:00.267   06:17:17	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:07:00.267   06:17:17	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:07:00.267   06:17:17	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:07:00.267   06:17:17	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:07:00.267   06:17:17	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:07:00.267   06:17:17	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:07:00.267   06:17:17	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:07:00.267   06:17:17	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:07:00.267   06:17:17	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:07:00.267   06:17:17	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:07:00.267   06:17:17	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:07:00.267   06:17:17	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:07:00.267   06:17:17	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:07:00.267   06:17:17	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:07:00.267   06:17:17	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:07:00.267   06:17:17	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:07:00.267   06:17:17	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:07:00.267   06:17:17	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:07:00.267   06:17:17	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:07:00.267  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:00.267  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms
00:07:00.267  
00:07:00.267  --- 10.0.0.2 ping statistics ---
00:07:00.267  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:00.267  rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
00:07:00.267   06:17:17	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:07:00.267  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:07:00.267  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms
00:07:00.267  
00:07:00.267  --- 10.0.0.3 ping statistics ---
00:07:00.267  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:00.267  rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms
00:07:00.267   06:17:17	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:07:00.267  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:00.267  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:07:00.267  
00:07:00.267  --- 10.0.0.1 ping statistics ---
00:07:00.268  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:00.268  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:07:00.268   06:17:17	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:00.268   06:17:17	-- nvmf/common.sh@421 -- # return 0
00:07:00.268   06:17:17	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:07:00.268   06:17:17	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:00.268   06:17:17	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:07:00.268   06:17:17	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:07:00.268   06:17:17	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:00.268   06:17:17	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:07:00.268   06:17:17	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
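(At this point nvmf_veth_init has finished: the target will run inside the nvmf_tgt_ns_spdk namespace and reach the initiator over veth pairs tied together by the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace. A condensed sketch of that topology, reconstructed from the commands traced above; only the first target interface is shown and the teardown/"true" error-suppression steps are omitted.)

    # Minimal reconstruction of the veth/bridge setup above (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the two host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                           # reachability check, as the script does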
00:07:00.526   06:17:17	-- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:07:00.526   06:17:17	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:00.526   06:17:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:00.526   06:17:17	-- common/autotest_common.sh@10 -- # set +x
00:07:00.526  ************************************
00:07:00.526  START TEST nvmf_filesystem_no_in_capsule
00:07:00.526  ************************************
00:07:00.526   06:17:17	-- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0
00:07:00.526   06:17:17	-- target/filesystem.sh@47 -- # in_capsule=0
00:07:00.526   06:17:17	-- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:07:00.526   06:17:17	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:07:00.526   06:17:17	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:00.526   06:17:17	-- common/autotest_common.sh@10 -- # set +x
00:07:00.526   06:17:17	-- nvmf/common.sh@469 -- # nvmfpid=60455
00:07:00.526   06:17:17	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:00.526   06:17:17	-- nvmf/common.sh@470 -- # waitforlisten 60455
00:07:00.526   06:17:17	-- common/autotest_common.sh@829 -- # '[' -z 60455 ']'
00:07:00.526   06:17:17	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:00.526   06:17:17	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:00.526   06:17:17	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:00.526  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:00.526   06:17:17	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:00.526   06:17:17	-- common/autotest_common.sh@10 -- # set +x
00:07:00.526  [2024-12-16 06:17:17.333545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:00.526  [2024-12-16 06:17:17.333632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:00.526  [2024-12-16 06:17:17.471699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:00.786  [2024-12-16 06:17:17.557569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:00.786  [2024-12-16 06:17:17.557699] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:00.786  [2024-12-16 06:17:17.557710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:00.786  [2024-12-16 06:17:17.557717] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:00.786  [2024-12-16 06:17:17.557860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:00.786  [2024-12-16 06:17:17.558191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:00.786  [2024-12-16 06:17:17.558729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:00.786  [2024-12-16 06:17:17.558737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.353   06:17:18	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:01.353   06:17:18	-- common/autotest_common.sh@862 -- # return 0
00:07:01.353   06:17:18	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:07:01.353   06:17:18	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:01.353   06:17:18	-- common/autotest_common.sh@10 -- # set +x
00:07:01.611   06:17:18	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:01.611   06:17:18	-- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:07:01.611   06:17:18	-- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:07:01.611   06:17:18	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.611   06:17:18	-- common/autotest_common.sh@10 -- # set +x
00:07:01.611  [2024-12-16 06:17:18.348422] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:01.612   06:17:18	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:01.612   06:17:18	-- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:07:01.612   06:17:18	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.612   06:17:18	-- common/autotest_common.sh@10 -- # set +x
00:07:01.612  Malloc1
00:07:01.612   06:17:18	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:01.612   06:17:18	-- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:07:01.612   06:17:18	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.612   06:17:18	-- common/autotest_common.sh@10 -- # set +x
00:07:01.612   06:17:18	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:01.612   06:17:18	-- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:07:01.612   06:17:18	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.612   06:17:18	-- common/autotest_common.sh@10 -- # set +x
00:07:01.612   06:17:18	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:01.612   06:17:18	-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:01.612   06:17:18	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.612   06:17:18	-- common/autotest_common.sh@10 -- # set +x
00:07:01.612  [2024-12-16 06:17:18.529076] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:01.612   06:17:18	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
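(The rpc_cmd calls above stand up the TCP target end to end: create the transport with no in-capsule data, back it with a 512 MiB malloc bdev, expose it as a namespace of nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420. Roughly the same sequence issued directly with the SPDK RPC client; the scripts/rpc.py path is the usual repo layout and is an assumption here, the RPC names and arguments are taken verbatim from the trace.)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0                     # TCP transport, in-capsule size 0
    $RPC bdev_malloc_create 512 512 -b Malloc1                            # 512 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1         # attach the bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420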
00:07:01.612    06:17:18	-- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:07:01.612    06:17:18	-- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1
00:07:01.612    06:17:18	-- common/autotest_common.sh@1368 -- # local bdev_info
00:07:01.612    06:17:18	-- common/autotest_common.sh@1369 -- # local bs
00:07:01.612    06:17:18	-- common/autotest_common.sh@1370 -- # local nb
00:07:01.612     06:17:18	-- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:07:01.612     06:17:18	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.612     06:17:18	-- common/autotest_common.sh@10 -- # set +x
00:07:01.612     06:17:18	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:01.612    06:17:18	-- common/autotest_common.sh@1371 -- # bdev_info='[
00:07:01.612  {
00:07:01.612  "aliases": [
00:07:01.612  "fd52489a-5c47-4fb5-ad3d-0af784fe9c82"
00:07:01.612  ],
00:07:01.612  "assigned_rate_limits": {
00:07:01.612  "r_mbytes_per_sec": 0,
00:07:01.612  "rw_ios_per_sec": 0,
00:07:01.612  "rw_mbytes_per_sec": 0,
00:07:01.612  "w_mbytes_per_sec": 0
00:07:01.612  },
00:07:01.612  "block_size": 512,
00:07:01.612  "claim_type": "exclusive_write",
00:07:01.612  "claimed": true,
00:07:01.612  "driver_specific": {},
00:07:01.612  "memory_domains": [
00:07:01.612  {
00:07:01.612  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.612  "dma_device_type": 2
00:07:01.612  }
00:07:01.612  ],
00:07:01.612  "name": "Malloc1",
00:07:01.612  "num_blocks": 1048576,
00:07:01.612  "product_name": "Malloc disk",
00:07:01.612  "supported_io_types": {
00:07:01.612  "abort": true,
00:07:01.612  "compare": false,
00:07:01.612  "compare_and_write": false,
00:07:01.612  "flush": true,
00:07:01.612  "nvme_admin": false,
00:07:01.612  "nvme_io": false,
00:07:01.612  "read": true,
00:07:01.612  "reset": true,
00:07:01.612  "unmap": true,
00:07:01.612  "write": true,
00:07:01.612  "write_zeroes": true
00:07:01.612  },
00:07:01.612  "uuid": "fd52489a-5c47-4fb5-ad3d-0af784fe9c82",
00:07:01.612  "zoned": false
00:07:01.612  }
00:07:01.612  ]'
00:07:01.612     06:17:18	-- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:07:01.870    06:17:18	-- common/autotest_common.sh@1372 -- # bs=512
00:07:01.870     06:17:18	-- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:07:01.870    06:17:18	-- common/autotest_common.sh@1373 -- # nb=1048576
00:07:01.870    06:17:18	-- common/autotest_common.sh@1376 -- # bdev_size=512
00:07:01.870    06:17:18	-- common/autotest_common.sh@1377 -- # echo 512
00:07:01.870   06:17:18	-- target/filesystem.sh@58 -- # malloc_size=536870912
00:07:01.870   06:17:18	-- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:07:01.870   06:17:18	-- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:07:01.870   06:17:18	-- common/autotest_common.sh@1187 -- # local i=0
00:07:02.129   06:17:18	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:07:02.129   06:17:18	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:07:02.129   06:17:18	-- common/autotest_common.sh@1194 -- # sleep 2
00:07:04.030   06:17:20	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:07:04.030    06:17:20	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:07:04.030    06:17:20	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:07:04.030   06:17:20	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:07:04.030   06:17:20	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:07:04.030   06:17:20	-- common/autotest_common.sh@1197 -- # return 0
00:07:04.030    06:17:20	-- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:07:04.030    06:17:20	-- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:07:04.030   06:17:20	-- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:07:04.030    06:17:20	-- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:07:04.030    06:17:20	-- setup/common.sh@76 -- # local dev=nvme0n1
00:07:04.030    06:17:20	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:07:04.030    06:17:20	-- setup/common.sh@80 -- # echo 536870912
00:07:04.030   06:17:20	-- target/filesystem.sh@64 -- # nvme_size=536870912
00:07:04.030   06:17:20	-- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:07:04.030   06:17:20	-- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:07:04.030   06:17:20	-- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:07:04.030   06:17:20	-- target/filesystem.sh@69 -- # partprobe
00:07:04.289   06:17:21	-- target/filesystem.sh@70 -- # sleep 1
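(With the target listening, the initiator connects over NVMe/TCP using the host NQN generated earlier, waits for the 512 MiB namespace to show up as nvme0n1, and lays down a single GPT partition for the filesystem tests. The equivalent initiator-side commands, lifted from the trace above; flag order differs but the values are unchanged.)

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e \
        --hostid=637bef51-f626-4f39-9a90-287f11e9b21e
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME        # wait until the namespace appears
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition spanning the disk
    partprobe                                                     # re-read the partition table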
00:07:05.241   06:17:22	-- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:07:05.241   06:17:22	-- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:07:05.241   06:17:22	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:05.241   06:17:22	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:05.241   06:17:22	-- common/autotest_common.sh@10 -- # set +x
00:07:05.241  ************************************
00:07:05.241  START TEST filesystem_ext4
00:07:05.241  ************************************
00:07:05.241   06:17:22	-- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1
00:07:05.241   06:17:22	-- target/filesystem.sh@18 -- # fstype=ext4
00:07:05.241   06:17:22	-- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:05.241   06:17:22	-- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:07:05.241   06:17:22	-- common/autotest_common.sh@912 -- # local fstype=ext4
00:07:05.241   06:17:22	-- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1
00:07:05.241   06:17:22	-- common/autotest_common.sh@914 -- # local i=0
00:07:05.241   06:17:22	-- common/autotest_common.sh@915 -- # local force
00:07:05.241   06:17:22	-- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']'
00:07:05.241   06:17:22	-- common/autotest_common.sh@918 -- # force=-F
00:07:05.241   06:17:22	-- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:07:05.241  mke2fs 1.47.0 (5-Feb-2023)
00:07:05.241  Discarding device blocks:      0/522240             done                            
00:07:05.241  Creating filesystem with 522240 1k blocks and 130560 inodes
00:07:05.241  Filesystem UUID: bef96bf1-fa46-451f-b18c-b8454bcc3c10
00:07:05.241  Superblock backups stored on blocks: 
00:07:05.241  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:07:05.241  
00:07:05.241  Allocating group tables:  0/64     done                            
00:07:05.241  Writing inode tables:  0/64     done                            
00:07:05.241  Creating journal (8192 blocks): done
00:07:05.500  Writing superblocks and filesystem accounting information:  0/64     done
00:07:05.500  
00:07:05.500   06:17:22	-- common/autotest_common.sh@931 -- # return 0
00:07:05.500   06:17:22	-- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:10.771   06:17:27	-- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:10.771   06:17:27	-- target/filesystem.sh@25 -- # sync
00:07:10.771   06:17:27	-- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:10.771   06:17:27	-- target/filesystem.sh@27 -- # sync
00:07:10.771   06:17:27	-- target/filesystem.sh@29 -- # i=0
00:07:10.771   06:17:27	-- target/filesystem.sh@30 -- # umount /mnt/device
00:07:10.771   06:17:27	-- target/filesystem.sh@37 -- # kill -0 60455
00:07:10.771   06:17:27	-- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:10.771   06:17:27	-- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:10.771   06:17:27	-- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:10.771   06:17:27	-- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:10.771  
00:07:10.771  real	0m5.660s
00:07:10.771  user	0m0.020s
00:07:10.771  sys	0m0.066s
00:07:10.771   06:17:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:10.771   06:17:27	-- common/autotest_common.sh@10 -- # set +x
00:07:10.771  ************************************
00:07:10.771  END TEST filesystem_ext4
00:07:10.771  ************************************
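(Each filesystem_* sub-test follows the same cycle traced above: make the filesystem on the partition, mount it, create and remove a file with syncs in between, unmount, then confirm the target process, namespace, and partition all survived. A compact sketch of one iteration, as run for ext4; the retry/force handling inside make_filesystem is omitted, and nvmfpid is the target pid saved at startup — 60455 in this run.)

    dev=/dev/nvme0n1p1
    mkfs.ext4 -F "$dev"                        # -F forces mkfs even if the partition holds old data
    mount "$dev" /mnt/device
    touch /mnt/device/aaa; sync                # write something and flush it
    rm /mnt/device/aaa;  sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present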
00:07:11.029   06:17:27	-- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:07:11.029   06:17:27	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:11.029   06:17:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:11.029   06:17:27	-- common/autotest_common.sh@10 -- # set +x
00:07:11.029  ************************************
00:07:11.029  START TEST filesystem_btrfs
00:07:11.029  ************************************
00:07:11.029   06:17:27	-- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1
00:07:11.029   06:17:27	-- target/filesystem.sh@18 -- # fstype=btrfs
00:07:11.029   06:17:27	-- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:11.029   06:17:27	-- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:07:11.029   06:17:27	-- common/autotest_common.sh@912 -- # local fstype=btrfs
00:07:11.029   06:17:27	-- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1
00:07:11.029   06:17:27	-- common/autotest_common.sh@914 -- # local i=0
00:07:11.029   06:17:27	-- common/autotest_common.sh@915 -- # local force
00:07:11.029   06:17:27	-- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']'
00:07:11.029   06:17:27	-- common/autotest_common.sh@920 -- # force=-f
00:07:11.030   06:17:27	-- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:07:11.030  btrfs-progs v6.8.1
00:07:11.030  See https://btrfs.readthedocs.io for more information.
00:07:11.030  
00:07:11.030  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:07:11.030  NOTE: several default settings have changed in version 5.15, please make sure
00:07:11.030        this does not affect your deployments:
00:07:11.030        - DUP for metadata (-m dup)
00:07:11.030        - enabled no-holes (-O no-holes)
00:07:11.030        - enabled free-space-tree (-R free-space-tree)
00:07:11.030  
00:07:11.030  Label:              (null)
00:07:11.030  UUID:               8f4a9e9e-f9fb-472d-80f5-caf0cb3637b3
00:07:11.030  Node size:          16384
00:07:11.030  Sector size:        4096	(CPU page size: 4096)
00:07:11.030  Filesystem size:    510.00MiB
00:07:11.030  Block group profiles:
00:07:11.030    Data:             single            8.00MiB
00:07:11.030    Metadata:         DUP              32.00MiB
00:07:11.030    System:           DUP               8.00MiB
00:07:11.030  SSD detected:       yes
00:07:11.030  Zoned device:       no
00:07:11.030  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:07:11.030  Checksum:           crc32c
00:07:11.030  Number of devices:  1
00:07:11.030  Devices:
00:07:11.030     ID        SIZE  PATH          
00:07:11.030      1   510.00MiB  /dev/nvme0n1p1
00:07:11.030  
00:07:11.030   06:17:27	-- common/autotest_common.sh@931 -- # return 0
00:07:11.030   06:17:27	-- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:11.030   06:17:27	-- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:11.030   06:17:27	-- target/filesystem.sh@25 -- # sync
00:07:11.030   06:17:27	-- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:11.030   06:17:27	-- target/filesystem.sh@27 -- # sync
00:07:11.030   06:17:27	-- target/filesystem.sh@29 -- # i=0
00:07:11.030   06:17:27	-- target/filesystem.sh@30 -- # umount /mnt/device
00:07:11.030   06:17:27	-- target/filesystem.sh@37 -- # kill -0 60455
00:07:11.030   06:17:27	-- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:11.030   06:17:27	-- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:11.289   06:17:28	-- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:11.289   06:17:28	-- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:11.289  ************************************
00:07:11.289  END TEST filesystem_btrfs
00:07:11.289  ************************************
00:07:11.289  
00:07:11.289  real	0m0.223s
00:07:11.289  user	0m0.017s
00:07:11.289  sys	0m0.059s
00:07:11.289   06:17:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:11.289   06:17:28	-- common/autotest_common.sh@10 -- # set +x
00:07:11.289   06:17:28	-- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:07:11.289   06:17:28	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:11.289   06:17:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:11.289   06:17:28	-- common/autotest_common.sh@10 -- # set +x
00:07:11.289  ************************************
00:07:11.289  START TEST filesystem_xfs
00:07:11.289  ************************************
00:07:11.289   06:17:28	-- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1
00:07:11.289   06:17:28	-- target/filesystem.sh@18 -- # fstype=xfs
00:07:11.289   06:17:28	-- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:11.289   06:17:28	-- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:07:11.289   06:17:28	-- common/autotest_common.sh@912 -- # local fstype=xfs
00:07:11.289   06:17:28	-- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1
00:07:11.289   06:17:28	-- common/autotest_common.sh@914 -- # local i=0
00:07:11.289   06:17:28	-- common/autotest_common.sh@915 -- # local force
00:07:11.289   06:17:28	-- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']'
00:07:11.289   06:17:28	-- common/autotest_common.sh@920 -- # force=-f
00:07:11.289   06:17:28	-- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1
00:07:11.289  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:07:11.289           =                       sectsz=512   attr=2, projid32bit=1
00:07:11.289           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:07:11.289           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:07:11.289  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:07:11.289           =                       sunit=0      swidth=0 blks
00:07:11.289  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:07:11.289  log      =internal log           bsize=4096   blocks=16384, version=2
00:07:11.289           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:07:11.289  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:07:12.226  Discarding blocks...Done.
00:07:12.226   06:17:28	-- common/autotest_common.sh@931 -- # return 0
00:07:12.226   06:17:28	-- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:14.758   06:17:31	-- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:14.758   06:17:31	-- target/filesystem.sh@25 -- # sync
00:07:14.758   06:17:31	-- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:14.758   06:17:31	-- target/filesystem.sh@27 -- # sync
00:07:14.758   06:17:31	-- target/filesystem.sh@29 -- # i=0
00:07:14.758   06:17:31	-- target/filesystem.sh@30 -- # umount /mnt/device
00:07:14.758   06:17:31	-- target/filesystem.sh@37 -- # kill -0 60455
00:07:14.758   06:17:31	-- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:14.758   06:17:31	-- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:14.758   06:17:31	-- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:14.758   06:17:31	-- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:14.759  ************************************
00:07:14.759  END TEST filesystem_xfs
00:07:14.759  ************************************
00:07:14.759  
00:07:14.759  real	0m3.186s
00:07:14.759  user	0m0.024s
00:07:14.759  sys	0m0.060s
00:07:14.759   06:17:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:14.759   06:17:31	-- common/autotest_common.sh@10 -- # set +x
00:07:14.759   06:17:31	-- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:07:14.759   06:17:31	-- target/filesystem.sh@93 -- # sync
00:07:14.759   06:17:31	-- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:07:14.759  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:14.759   06:17:31	-- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:07:14.759   06:17:31	-- common/autotest_common.sh@1208 -- # local i=0
00:07:14.759   06:17:31	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:07:14.759   06:17:31	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:14.759   06:17:31	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:07:14.759   06:17:31	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:14.759   06:17:31	-- common/autotest_common.sh@1220 -- # return 0
00:07:14.759   06:17:31	-- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:14.759   06:17:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.759   06:17:31	-- common/autotest_common.sh@10 -- # set +x
00:07:14.759   06:17:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.759   06:17:31	-- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:07:14.759   06:17:31	-- target/filesystem.sh@101 -- # killprocess 60455
00:07:14.759   06:17:31	-- common/autotest_common.sh@936 -- # '[' -z 60455 ']'
00:07:14.759   06:17:31	-- common/autotest_common.sh@940 -- # kill -0 60455
00:07:14.759    06:17:31	-- common/autotest_common.sh@941 -- # uname
00:07:14.759   06:17:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:14.759    06:17:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60455
00:07:14.759  killing process with pid 60455
00:07:14.759   06:17:31	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:14.759   06:17:31	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:14.759   06:17:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 60455'
00:07:14.759   06:17:31	-- common/autotest_common.sh@955 -- # kill 60455
00:07:14.759   06:17:31	-- common/autotest_common.sh@960 -- # wait 60455
00:07:15.018   06:17:31	-- target/filesystem.sh@102 -- # nvmfpid=
00:07:15.018  
00:07:15.018  real	0m14.598s
00:07:15.018  user	0m55.907s
00:07:15.018  sys	0m2.004s
00:07:15.018   06:17:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:15.018   06:17:31	-- common/autotest_common.sh@10 -- # set +x
00:07:15.018  ************************************
00:07:15.018  END TEST nvmf_filesystem_no_in_capsule
00:07:15.018  ************************************
00:07:15.018   06:17:31	-- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:07:15.018   06:17:31	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:15.018   06:17:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:15.018   06:17:31	-- common/autotest_common.sh@10 -- # set +x
00:07:15.018  ************************************
00:07:15.018  START TEST nvmf_filesystem_in_capsule
00:07:15.018  ************************************
00:07:15.018   06:17:31	-- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096
00:07:15.018   06:17:31	-- target/filesystem.sh@47 -- # in_capsule=4096
00:07:15.018   06:17:31	-- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:07:15.018   06:17:31	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:07:15.018   06:17:31	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:15.018   06:17:31	-- common/autotest_common.sh@10 -- # set +x
00:07:15.018   06:17:31	-- nvmf/common.sh@469 -- # nvmfpid=60828
00:07:15.018   06:17:31	-- nvmf/common.sh@470 -- # waitforlisten 60828
00:07:15.018   06:17:31	-- common/autotest_common.sh@829 -- # '[' -z 60828 ']'
00:07:15.018   06:17:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.018   06:17:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:15.018   06:17:31	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:15.018  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.018   06:17:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.018   06:17:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:15.018   06:17:31	-- common/autotest_common.sh@10 -- # set +x
00:07:15.018  [2024-12-16 06:17:31.988163] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:15.018  [2024-12-16 06:17:31.988280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:15.277  [2024-12-16 06:17:32.131032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:15.277  [2024-12-16 06:17:32.202833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:15.277  [2024-12-16 06:17:32.203003] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:15.277  [2024-12-16 06:17:32.203032] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:15.277  [2024-12-16 06:17:32.203040] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:15.277  [2024-12-16 06:17:32.203176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:15.277  [2024-12-16 06:17:32.203344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:15.277  [2024-12-16 06:17:32.203748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:15.277  [2024-12-16 06:17:32.203754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.213   06:17:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:16.213   06:17:32	-- common/autotest_common.sh@862 -- # return 0
00:07:16.213   06:17:32	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:07:16.213   06:17:32	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:16.213   06:17:32	-- common/autotest_common.sh@10 -- # set +x
00:07:16.213   06:17:32	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:16.213   06:17:32	-- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:07:16.213   06:17:32	-- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:07:16.213   06:17:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.213   06:17:32	-- common/autotest_common.sh@10 -- # set +x
00:07:16.213  [2024-12-16 06:17:32.947296] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:16.213   06:17:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.214   06:17:32	-- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:07:16.214   06:17:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.214   06:17:32	-- common/autotest_common.sh@10 -- # set +x
00:07:16.214  Malloc1
00:07:16.214   06:17:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.214   06:17:33	-- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:07:16.214   06:17:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.214   06:17:33	-- common/autotest_common.sh@10 -- # set +x
00:07:16.214   06:17:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.214   06:17:33	-- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:07:16.214   06:17:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.214   06:17:33	-- common/autotest_common.sh@10 -- # set +x
00:07:16.214   06:17:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.214   06:17:33	-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:16.214   06:17:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.214   06:17:33	-- common/autotest_common.sh@10 -- # set +x
00:07:16.214  [2024-12-16 06:17:33.127639] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:16.214   06:17:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.214    06:17:33	-- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:07:16.214    06:17:33	-- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1
00:07:16.214    06:17:33	-- common/autotest_common.sh@1368 -- # local bdev_info
00:07:16.214    06:17:33	-- common/autotest_common.sh@1369 -- # local bs
00:07:16.214    06:17:33	-- common/autotest_common.sh@1370 -- # local nb
00:07:16.214     06:17:33	-- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:07:16.214     06:17:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.214     06:17:33	-- common/autotest_common.sh@10 -- # set +x
00:07:16.214     06:17:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.214    06:17:33	-- common/autotest_common.sh@1371 -- # bdev_info='[
00:07:16.214  {
00:07:16.214  "aliases": [
00:07:16.214  "5f6b48c8-e4d8-454a-a459-f6dcfdca9caa"
00:07:16.214  ],
00:07:16.214  "assigned_rate_limits": {
00:07:16.214  "r_mbytes_per_sec": 0,
00:07:16.214  "rw_ios_per_sec": 0,
00:07:16.214  "rw_mbytes_per_sec": 0,
00:07:16.214  "w_mbytes_per_sec": 0
00:07:16.214  },
00:07:16.214  "block_size": 512,
00:07:16.214  "claim_type": "exclusive_write",
00:07:16.214  "claimed": true,
00:07:16.214  "driver_specific": {},
00:07:16.214  "memory_domains": [
00:07:16.214  {
00:07:16.214  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:16.214  "dma_device_type": 2
00:07:16.214  }
00:07:16.214  ],
00:07:16.214  "name": "Malloc1",
00:07:16.214  "num_blocks": 1048576,
00:07:16.214  "product_name": "Malloc disk",
00:07:16.214  "supported_io_types": {
00:07:16.214  "abort": true,
00:07:16.214  "compare": false,
00:07:16.214  "compare_and_write": false,
00:07:16.214  "flush": true,
00:07:16.214  "nvme_admin": false,
00:07:16.214  "nvme_io": false,
00:07:16.214  "read": true,
00:07:16.214  "reset": true,
00:07:16.214  "unmap": true,
00:07:16.214  "write": true,
00:07:16.214  "write_zeroes": true
00:07:16.214  },
00:07:16.214  "uuid": "5f6b48c8-e4d8-454a-a459-f6dcfdca9caa",
00:07:16.214  "zoned": false
00:07:16.214  }
00:07:16.214  ]'
00:07:16.214     06:17:33	-- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:07:16.473    06:17:33	-- common/autotest_common.sh@1372 -- # bs=512
00:07:16.473     06:17:33	-- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:07:16.473    06:17:33	-- common/autotest_common.sh@1373 -- # nb=1048576
00:07:16.473    06:17:33	-- common/autotest_common.sh@1376 -- # bdev_size=512
00:07:16.473    06:17:33	-- common/autotest_common.sh@1377 -- # echo 512
00:07:16.473   06:17:33	-- target/filesystem.sh@58 -- # malloc_size=536870912
00:07:16.473   06:17:33	-- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:07:16.473   06:17:33	-- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:07:16.473   06:17:33	-- common/autotest_common.sh@1187 -- # local i=0
00:07:16.473   06:17:33	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:07:16.473   06:17:33	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:07:16.473   06:17:33	-- common/autotest_common.sh@1194 -- # sleep 2
00:07:19.046   06:17:35	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:07:19.046    06:17:35	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:07:19.046    06:17:35	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:07:19.046   06:17:35	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:07:19.046   06:17:35	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:07:19.046   06:17:35	-- common/autotest_common.sh@1197 -- # return 0
00:07:19.046    06:17:35	-- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:07:19.046    06:17:35	-- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:07:19.046   06:17:35	-- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:07:19.046    06:17:35	-- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:07:19.046    06:17:35	-- setup/common.sh@76 -- # local dev=nvme0n1
00:07:19.046    06:17:35	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:07:19.046    06:17:35	-- setup/common.sh@80 -- # echo 536870912
00:07:19.046   06:17:35	-- target/filesystem.sh@64 -- # nvme_size=536870912
00:07:19.046   06:17:35	-- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:07:19.046   06:17:35	-- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:07:19.046   06:17:35	-- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:07:19.046   06:17:35	-- target/filesystem.sh@69 -- # partprobe
00:07:19.046   06:17:35	-- target/filesystem.sh@70 -- # sleep 1
00:07:19.615   06:17:36	-- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:07:19.615   06:17:36	-- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:07:19.615   06:17:36	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:19.873   06:17:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:19.873   06:17:36	-- common/autotest_common.sh@10 -- # set +x
00:07:19.873  ************************************
00:07:19.873  START TEST filesystem_in_capsule_ext4
00:07:19.873  ************************************
00:07:19.873   06:17:36	-- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1
00:07:19.873   06:17:36	-- target/filesystem.sh@18 -- # fstype=ext4
00:07:19.873   06:17:36	-- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:19.873   06:17:36	-- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:07:19.873   06:17:36	-- common/autotest_common.sh@912 -- # local fstype=ext4
00:07:19.873   06:17:36	-- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1
00:07:19.873   06:17:36	-- common/autotest_common.sh@914 -- # local i=0
00:07:19.873   06:17:36	-- common/autotest_common.sh@915 -- # local force
00:07:19.873   06:17:36	-- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']'
00:07:19.873   06:17:36	-- common/autotest_common.sh@918 -- # force=-F
00:07:19.873   06:17:36	-- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:07:19.873  mke2fs 1.47.0 (5-Feb-2023)
00:07:19.873  Discarding device blocks:      0/522240             done                            
00:07:19.873  Creating filesystem with 522240 1k blocks and 130560 inodes
00:07:19.873  Filesystem UUID: a75fe5b7-323d-4961-9ec6-42d485b1cbc7
00:07:19.873  Superblock backups stored on blocks: 
00:07:19.873  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:07:19.874  
00:07:19.874  Allocating group tables:  0/64     done                            
00:07:19.874  Writing inode tables:  0/64     done                            
00:07:19.874  Creating journal (8192 blocks): done
00:07:19.874  Writing superblocks and filesystem accounting information:  0/64     done
00:07:19.874  
00:07:19.874   06:17:36	-- common/autotest_common.sh@931 -- # return 0
00:07:19.874   06:17:36	-- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:25.142   06:17:42	-- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:25.401   06:17:42	-- target/filesystem.sh@25 -- # sync
00:07:25.401   06:17:42	-- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:25.401   06:17:42	-- target/filesystem.sh@27 -- # sync
00:07:25.401   06:17:42	-- target/filesystem.sh@29 -- # i=0
00:07:25.401   06:17:42	-- target/filesystem.sh@30 -- # umount /mnt/device
00:07:25.401   06:17:42	-- target/filesystem.sh@37 -- # kill -0 60828
00:07:25.401   06:17:42	-- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:25.401   06:17:42	-- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:25.401   06:17:42	-- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:25.401   06:17:42	-- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:25.401  
00:07:25.401  real	0m5.615s
00:07:25.401  user	0m0.031s
00:07:25.401  sys	0m0.052s
00:07:25.401   06:17:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:25.401   06:17:42	-- common/autotest_common.sh@10 -- # set +x
00:07:25.401  ************************************
00:07:25.401  END TEST filesystem_in_capsule_ext4
00:07:25.401  ************************************
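Stripped of the xtrace bookkeeping, the ext4 case above boils down to making a filesystem on the fabric-attached partition, doing a small write/delete through it, and confirming the target and block devices are still healthy. A minimal sketch of that flow (paths and the target PID variable are assumptions mirroring the trace):

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync               # push a write over NVMe/TCP
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                          # SPDK target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present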
00:07:25.401   06:17:42	-- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:07:25.401   06:17:42	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:25.401   06:17:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:25.401   06:17:42	-- common/autotest_common.sh@10 -- # set +x
00:07:25.401  ************************************
00:07:25.401  START TEST filesystem_in_capsule_btrfs
00:07:25.401  ************************************
00:07:25.401   06:17:42	-- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1
00:07:25.401   06:17:42	-- target/filesystem.sh@18 -- # fstype=btrfs
00:07:25.401   06:17:42	-- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:25.401   06:17:42	-- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:07:25.401   06:17:42	-- common/autotest_common.sh@912 -- # local fstype=btrfs
00:07:25.401   06:17:42	-- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1
00:07:25.401   06:17:42	-- common/autotest_common.sh@914 -- # local i=0
00:07:25.401   06:17:42	-- common/autotest_common.sh@915 -- # local force
00:07:25.401   06:17:42	-- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']'
00:07:25.401   06:17:42	-- common/autotest_common.sh@920 -- # force=-f
00:07:25.401   06:17:42	-- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:07:25.660  btrfs-progs v6.8.1
00:07:25.660  See https://btrfs.readthedocs.io for more information.
00:07:25.660  
00:07:25.660  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:07:25.660  NOTE: several default settings have changed in version 5.15, please make sure
00:07:25.660        this does not affect your deployments:
00:07:25.660        - DUP for metadata (-m dup)
00:07:25.660        - enabled no-holes (-O no-holes)
00:07:25.660        - enabled free-space-tree (-R free-space-tree)
00:07:25.660  
00:07:25.660  Label:              (null)
00:07:25.660  UUID:               9176087c-188d-4cbc-84cb-f47656efccc8
00:07:25.660  Node size:          16384
00:07:25.660  Sector size:        4096	(CPU page size: 4096)
00:07:25.660  Filesystem size:    510.00MiB
00:07:25.660  Block group profiles:
00:07:25.660    Data:             single            8.00MiB
00:07:25.660    Metadata:         DUP              32.00MiB
00:07:25.660    System:           DUP               8.00MiB
00:07:25.660  SSD detected:       yes
00:07:25.660  Zoned device:       no
00:07:25.660  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:07:25.660  Checksum:           crc32c
00:07:25.660  Number of devices:  1
00:07:25.660  Devices:
00:07:25.660     ID        SIZE  PATH          
00:07:25.660      1   510.00MiB  /dev/nvme0n1p1
00:07:25.660  
00:07:25.660   06:17:42	-- common/autotest_common.sh@931 -- # return 0
00:07:25.660   06:17:42	-- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:25.660   06:17:42	-- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:25.660   06:17:42	-- target/filesystem.sh@25 -- # sync
00:07:25.660   06:17:42	-- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:25.660   06:17:42	-- target/filesystem.sh@27 -- # sync
00:07:25.660   06:17:42	-- target/filesystem.sh@29 -- # i=0
00:07:25.660   06:17:42	-- target/filesystem.sh@30 -- # umount /mnt/device
00:07:25.660   06:17:42	-- target/filesystem.sh@37 -- # kill -0 60828
00:07:25.660   06:17:42	-- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:25.660   06:17:42	-- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:25.660   06:17:42	-- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:25.660   06:17:42	-- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:25.660  
00:07:25.660  real	0m0.224s
00:07:25.660  user	0m0.021s
00:07:25.660  sys	0m0.061s
00:07:25.660   06:17:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:25.660   06:17:42	-- common/autotest_common.sh@10 -- # set +x
00:07:25.660  ************************************
00:07:25.660  END TEST filesystem_in_capsule_btrfs
00:07:25.660  ************************************
00:07:25.660   06:17:42	-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:07:25.660   06:17:42	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:25.660   06:17:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:25.660   06:17:42	-- common/autotest_common.sh@10 -- # set +x
00:07:25.660  ************************************
00:07:25.660  START TEST filesystem_in_capsule_xfs
00:07:25.660  ************************************
00:07:25.660   06:17:42	-- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1
00:07:25.660   06:17:42	-- target/filesystem.sh@18 -- # fstype=xfs
00:07:25.660   06:17:42	-- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:25.660   06:17:42	-- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:07:25.660   06:17:42	-- common/autotest_common.sh@912 -- # local fstype=xfs
00:07:25.660   06:17:42	-- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1
00:07:25.660   06:17:42	-- common/autotest_common.sh@914 -- # local i=0
00:07:25.660   06:17:42	-- common/autotest_common.sh@915 -- # local force
00:07:25.660   06:17:42	-- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']'
00:07:25.660   06:17:42	-- common/autotest_common.sh@920 -- # force=-f
00:07:25.660   06:17:42	-- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1
00:07:25.919  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:07:25.919           =                       sectsz=512   attr=2, projid32bit=1
00:07:25.920           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:07:25.920           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:07:25.920  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:07:25.920           =                       sunit=0      swidth=0 blks
00:07:25.920  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:07:25.920  log      =internal log           bsize=4096   blocks=16384, version=2
00:07:25.920           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:07:25.920  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:07:26.487  Discarding blocks...Done.
00:07:26.487   06:17:43	-- common/autotest_common.sh@931 -- # return 0
00:07:26.487   06:17:43	-- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:28.389   06:17:45	-- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:28.389   06:17:45	-- target/filesystem.sh@25 -- # sync
00:07:28.389   06:17:45	-- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:28.389   06:17:45	-- target/filesystem.sh@27 -- # sync
00:07:28.389   06:17:45	-- target/filesystem.sh@29 -- # i=0
00:07:28.389   06:17:45	-- target/filesystem.sh@30 -- # umount /mnt/device
00:07:28.389   06:17:45	-- target/filesystem.sh@37 -- # kill -0 60828
00:07:28.389   06:17:45	-- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:28.389   06:17:45	-- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:28.389   06:17:45	-- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:28.389   06:17:45	-- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:28.389  
00:07:28.389  real	0m2.700s
00:07:28.389  user	0m0.023s
00:07:28.389  sys	0m0.055s
00:07:28.389   06:17:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:28.389   06:17:45	-- common/autotest_common.sh@10 -- # set +x
00:07:28.389  ************************************
00:07:28.389  END TEST filesystem_in_capsule_xfs
00:07:28.389  ************************************
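The btrfs and xfs cases reuse the same flow; the only per-filesystem difference visible in the traces is how make_filesystem picks the mkfs "force" flag. A sketch of that selection (retry and error handling in the real helper are omitted):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F      # mke2fs overwrites an existing signature with -F
        else
            force=-f      # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }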
00:07:28.389   06:17:45	-- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:07:28.389   06:17:45	-- target/filesystem.sh@93 -- # sync
00:07:28.389   06:17:45	-- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:07:28.389  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:28.389   06:17:45	-- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:07:28.389   06:17:45	-- common/autotest_common.sh@1208 -- # local i=0
00:07:28.647   06:17:45	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:28.647   06:17:45	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:07:28.647   06:17:45	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:07:28.647   06:17:45	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:28.647   06:17:45	-- common/autotest_common.sh@1220 -- # return 0
00:07:28.647   06:17:45	-- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:28.648   06:17:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.648   06:17:45	-- common/autotest_common.sh@10 -- # set +x
00:07:28.648   06:17:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.648   06:17:45	-- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:07:28.648   06:17:45	-- target/filesystem.sh@101 -- # killprocess 60828
00:07:28.648   06:17:45	-- common/autotest_common.sh@936 -- # '[' -z 60828 ']'
00:07:28.648   06:17:45	-- common/autotest_common.sh@940 -- # kill -0 60828
00:07:28.648    06:17:45	-- common/autotest_common.sh@941 -- # uname
00:07:28.648   06:17:45	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:28.648    06:17:45	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60828
00:07:28.648   06:17:45	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:28.648   06:17:45	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:28.648  killing process with pid 60828
00:07:28.648   06:17:45	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 60828'
00:07:28.648   06:17:45	-- common/autotest_common.sh@955 -- # kill 60828
00:07:28.648   06:17:45	-- common/autotest_common.sh@960 -- # wait 60828
00:07:28.906   06:17:45	-- target/filesystem.sh@102 -- # nvmfpid=
00:07:28.906  
00:07:28.906  real	0m13.930s
00:07:28.906  user	0m53.688s
00:07:28.906  sys	0m1.530s
00:07:28.906   06:17:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:28.906   06:17:45	-- common/autotest_common.sh@10 -- # set +x
00:07:28.906  ************************************
00:07:28.906  END TEST nvmf_filesystem_in_capsule
00:07:28.906  ************************************
00:07:29.164   06:17:45	-- target/filesystem.sh@108 -- # nvmftestfini
00:07:29.164   06:17:45	-- nvmf/common.sh@476 -- # nvmfcleanup
00:07:29.164   06:17:45	-- nvmf/common.sh@116 -- # sync
00:07:29.164   06:17:45	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:07:29.164   06:17:45	-- nvmf/common.sh@119 -- # set +e
00:07:29.164   06:17:45	-- nvmf/common.sh@120 -- # for i in {1..20}
00:07:29.164   06:17:45	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:07:29.164  rmmod nvme_tcp
00:07:29.164  rmmod nvme_fabrics
00:07:29.164  rmmod nvme_keyring
00:07:29.164   06:17:45	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:07:29.164   06:17:45	-- nvmf/common.sh@123 -- # set -e
00:07:29.164   06:17:45	-- nvmf/common.sh@124 -- # return 0
00:07:29.164   06:17:45	-- nvmf/common.sh@477 -- # '[' -n '' ']'
00:07:29.164   06:17:45	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:07:29.164   06:17:45	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:07:29.164   06:17:45	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:07:29.164   06:17:45	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:29.164   06:17:45	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:07:29.164   06:17:45	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:29.164   06:17:45	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:29.164    06:17:45	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:29.164   06:17:46	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:07:29.164  
00:07:29.164  real	0m29.439s
00:07:29.164  user	1m49.968s
00:07:29.164  sys	0m3.925s
00:07:29.164   06:17:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:29.164   06:17:46	-- common/autotest_common.sh@10 -- # set +x
00:07:29.164  ************************************
00:07:29.164  END TEST nvmf_filesystem
00:07:29.164  ************************************
00:07:29.164   06:17:46	-- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:07:29.164   06:17:46	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:29.164   06:17:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:29.164   06:17:46	-- common/autotest_common.sh@10 -- # set +x
00:07:29.164  ************************************
00:07:29.164  START TEST nvmf_discovery
00:07:29.164  ************************************
00:07:29.164   06:17:46	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:07:29.164  * Looking for test storage...
00:07:29.164  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:29.164    06:17:46	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:29.164     06:17:46	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:29.164     06:17:46	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:29.423    06:17:46	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:29.423    06:17:46	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:29.423    06:17:46	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:29.423    06:17:46	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:29.423    06:17:46	-- scripts/common.sh@335 -- # IFS=.-:
00:07:29.423    06:17:46	-- scripts/common.sh@335 -- # read -ra ver1
00:07:29.423    06:17:46	-- scripts/common.sh@336 -- # IFS=.-:
00:07:29.423    06:17:46	-- scripts/common.sh@336 -- # read -ra ver2
00:07:29.423    06:17:46	-- scripts/common.sh@337 -- # local 'op=<'
00:07:29.423    06:17:46	-- scripts/common.sh@339 -- # ver1_l=2
00:07:29.423    06:17:46	-- scripts/common.sh@340 -- # ver2_l=1
00:07:29.423    06:17:46	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:29.423    06:17:46	-- scripts/common.sh@343 -- # case "$op" in
00:07:29.423    06:17:46	-- scripts/common.sh@344 -- # : 1
00:07:29.423    06:17:46	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:29.423    06:17:46	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:29.423     06:17:46	-- scripts/common.sh@364 -- # decimal 1
00:07:29.423     06:17:46	-- scripts/common.sh@352 -- # local d=1
00:07:29.423     06:17:46	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:29.423     06:17:46	-- scripts/common.sh@354 -- # echo 1
00:07:29.423    06:17:46	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:29.423     06:17:46	-- scripts/common.sh@365 -- # decimal 2
00:07:29.423     06:17:46	-- scripts/common.sh@352 -- # local d=2
00:07:29.423     06:17:46	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:29.423     06:17:46	-- scripts/common.sh@354 -- # echo 2
00:07:29.423    06:17:46	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:29.423    06:17:46	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:29.423    06:17:46	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:29.423    06:17:46	-- scripts/common.sh@367 -- # return 0
00:07:29.423    06:17:46	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:29.423    06:17:46	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:29.423  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:29.423  		--rc genhtml_branch_coverage=1
00:07:29.423  		--rc genhtml_function_coverage=1
00:07:29.423  		--rc genhtml_legend=1
00:07:29.423  		--rc geninfo_all_blocks=1
00:07:29.423  		--rc geninfo_unexecuted_blocks=1
00:07:29.423  		
00:07:29.423  		'
00:07:29.423    06:17:46	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:29.423  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:29.423  		--rc genhtml_branch_coverage=1
00:07:29.423  		--rc genhtml_function_coverage=1
00:07:29.423  		--rc genhtml_legend=1
00:07:29.423  		--rc geninfo_all_blocks=1
00:07:29.423  		--rc geninfo_unexecuted_blocks=1
00:07:29.423  		
00:07:29.423  		'
00:07:29.423    06:17:46	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:29.423  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:29.423  		--rc genhtml_branch_coverage=1
00:07:29.423  		--rc genhtml_function_coverage=1
00:07:29.423  		--rc genhtml_legend=1
00:07:29.423  		--rc geninfo_all_blocks=1
00:07:29.423  		--rc geninfo_unexecuted_blocks=1
00:07:29.423  		
00:07:29.423  		'
00:07:29.423    06:17:46	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:29.423  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:29.423  		--rc genhtml_branch_coverage=1
00:07:29.423  		--rc genhtml_function_coverage=1
00:07:29.423  		--rc genhtml_legend=1
00:07:29.423  		--rc geninfo_all_blocks=1
00:07:29.423  		--rc geninfo_unexecuted_blocks=1
00:07:29.423  		
00:07:29.423  		'
00:07:29.423   06:17:46	-- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:29.423     06:17:46	-- nvmf/common.sh@7 -- # uname -s
00:07:29.423    06:17:46	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:29.423    06:17:46	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:29.423    06:17:46	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:29.423    06:17:46	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:29.423    06:17:46	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:29.423    06:17:46	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:29.423    06:17:46	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:29.423    06:17:46	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:29.423    06:17:46	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:29.423     06:17:46	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:29.423    06:17:46	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:07:29.423    06:17:46	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:07:29.423    06:17:46	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:29.423    06:17:46	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:29.423    06:17:46	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:07:29.423    06:17:46	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:29.423     06:17:46	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:29.423     06:17:46	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:29.423     06:17:46	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:29.423      06:17:46	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:29.423      06:17:46	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:29.423      06:17:46	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:29.423      06:17:46	-- paths/export.sh@5 -- # export PATH
00:07:29.423      06:17:46	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:29.423    06:17:46	-- nvmf/common.sh@46 -- # : 0
00:07:29.423    06:17:46	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:29.423    06:17:46	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:29.423    06:17:46	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:29.423    06:17:46	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:29.423    06:17:46	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:29.423    06:17:46	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:29.423    06:17:46	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:29.423    06:17:46	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:29.423   06:17:46	-- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:07:29.423   06:17:46	-- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:07:29.423   06:17:46	-- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:07:29.423   06:17:46	-- target/discovery.sh@15 -- # hash nvme
00:07:29.423   06:17:46	-- target/discovery.sh@20 -- # nvmftestinit
00:07:29.423   06:17:46	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:07:29.423   06:17:46	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:29.423   06:17:46	-- nvmf/common.sh@436 -- # prepare_net_devs
00:07:29.423   06:17:46	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:07:29.423   06:17:46	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:07:29.423   06:17:46	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:29.423   06:17:46	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:29.423    06:17:46	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:29.423   06:17:46	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:07:29.423   06:17:46	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:07:29.423   06:17:46	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:07:29.423   06:17:46	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:07:29.423   06:17:46	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:07:29.424   06:17:46	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:07:29.424   06:17:46	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:29.424   06:17:46	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:29.424   06:17:46	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:07:29.424   06:17:46	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:07:29.424   06:17:46	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:07:29.424   06:17:46	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:07:29.424   06:17:46	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:07:29.424   06:17:46	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:29.424   06:17:46	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:07:29.424   06:17:46	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:07:29.424   06:17:46	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:07:29.424   06:17:46	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:07:29.424   06:17:46	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:07:29.424   06:17:46	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:07:29.424  Cannot find device "nvmf_tgt_br"
00:07:29.424   06:17:46	-- nvmf/common.sh@154 -- # true
00:07:29.424   06:17:46	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:07:29.424  Cannot find device "nvmf_tgt_br2"
00:07:29.424   06:17:46	-- nvmf/common.sh@155 -- # true
00:07:29.424   06:17:46	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:07:29.424   06:17:46	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:07:29.424  Cannot find device "nvmf_tgt_br"
00:07:29.424   06:17:46	-- nvmf/common.sh@157 -- # true
00:07:29.424   06:17:46	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:07:29.424  Cannot find device "nvmf_tgt_br2"
00:07:29.424   06:17:46	-- nvmf/common.sh@158 -- # true
00:07:29.424   06:17:46	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:07:29.424   06:17:46	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:07:29.683   06:17:46	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:29.683  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:29.683   06:17:46	-- nvmf/common.sh@161 -- # true
00:07:29.683   06:17:46	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:07:29.683  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:29.683   06:17:46	-- nvmf/common.sh@162 -- # true
00:07:29.683   06:17:46	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:07:29.683   06:17:46	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:07:29.683   06:17:46	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:07:29.683   06:17:46	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:07:29.683   06:17:46	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:07:29.683   06:17:46	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:07:29.683   06:17:46	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:07:29.683   06:17:46	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:07:29.683   06:17:46	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:07:29.683   06:17:46	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:07:29.683   06:17:46	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:07:29.683   06:17:46	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:07:29.683   06:17:46	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:07:29.683   06:17:46	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:07:29.683   06:17:46	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:07:29.683   06:17:46	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:07:29.683   06:17:46	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:07:29.683   06:17:46	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:07:29.683   06:17:46	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:07:29.683   06:17:46	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:07:29.683   06:17:46	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:07:29.683   06:17:46	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:07:29.683   06:17:46	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:07:29.683   06:17:46	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:07:29.683  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:29.683  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:07:29.683  
00:07:29.683  --- 10.0.0.2 ping statistics ---
00:07:29.683  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.683  rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:07:29.683   06:17:46	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:07:29.683  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:07:29.683  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms
00:07:29.683  
00:07:29.683  --- 10.0.0.3 ping statistics ---
00:07:29.683  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.683  rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:07:29.683   06:17:46	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:07:29.683  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:29.683  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:07:29.683  
00:07:29.683  --- 10.0.0.1 ping statistics ---
00:07:29.683  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.683  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:07:29.684   06:17:46	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:29.684   06:17:46	-- nvmf/common.sh@421 -- # return 0
00:07:29.684   06:17:46	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:07:29.684   06:17:46	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:29.684   06:17:46	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:07:29.684   06:17:46	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:07:29.684   06:17:46	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:29.684   06:17:46	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:07:29.684   06:17:46	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
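The block above builds an isolated topology for the TCP transport: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, the kernel initiator stays in the root namespace on 10.0.0.1, and a bridge ties the veth pairs together. Condensed (second target interface, link-up and ping steps omitted; names mirror the commands traced above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp      # kernel NVMe/TCP initiator for the host side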
00:07:29.684   06:17:46	-- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:07:29.684   06:17:46	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:07:29.684   06:17:46	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:29.684   06:17:46	-- common/autotest_common.sh@10 -- # set +x
00:07:29.684   06:17:46	-- nvmf/common.sh@469 -- # nvmfpid=61378
00:07:29.684   06:17:46	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:29.684   06:17:46	-- nvmf/common.sh@470 -- # waitforlisten 61378
00:07:29.684   06:17:46	-- common/autotest_common.sh@829 -- # '[' -z 61378 ']'
00:07:29.684   06:17:46	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:29.684   06:17:46	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:29.684   06:17:46	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:29.684  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:29.684   06:17:46	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:29.684   06:17:46	-- common/autotest_common.sh@10 -- # set +x
00:07:29.684  [2024-12-16 06:17:46.637781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:29.684  [2024-12-16 06:17:46.637877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:29.943  [2024-12-16 06:17:46.775893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:29.943  [2024-12-16 06:17:46.874597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:29.943  [2024-12-16 06:17:46.874766] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:29.943  [2024-12-16 06:17:46.874782] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:29.943  [2024-12-16 06:17:46.874793] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:29.943  [2024-12-16 06:17:46.875046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:29.943  [2024-12-16 06:17:46.875167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:29.943  [2024-12-16 06:17:46.875298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:29.943  [2024-12-16 06:17:46.875306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.915   06:17:47	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:30.915   06:17:47	-- common/autotest_common.sh@862 -- # return 0
00:07:30.915   06:17:47	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:07:30.915   06:17:47	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915   06:17:47	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:30.915   06:17:47	-- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915  [2024-12-16 06:17:47.626760] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915    06:17:47	-- target/discovery.sh@26 -- # seq 1 4
00:07:30.915   06:17:47	-- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:07:30.915   06:17:47	-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915  Null1
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915  [2024-12-16 06:17:47.687599] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:07:30.915   06:17:47	-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915  Null2
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:07:30.915   06:17:47	-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915  Null3
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.915   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.915   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.915   06:17:47	-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:07:30.915   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.916   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.916   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.916   06:17:47	-- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:07:30.916   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.916   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.916   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.916   06:17:47	-- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:07:30.916   06:17:47	-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:07:30.916   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.916   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.916  Null4
00:07:30.916   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.916   06:17:47	-- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:07:30.916   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.916   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.916   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.916   06:17:47	-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:07:30.916   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.916   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.916   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.916   06:17:47	-- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:07:30.916   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.916   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.916   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.916   06:17:47	-- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:30.916   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.916   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.916   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.916   06:17:47	-- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:07:30.916   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.916   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:30.916   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
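Each rpc_cmd call above is a JSON-RPC request to the target started earlier. Expressed with SPDK's scripts/rpc.py, the setup for one of the four null subsystems plus the discovery listener and referral looks roughly like this (rpc.py here stands in for the standard script; the bdev size and block size come from NULL_BDEV_SIZE/NULL_BLOCK_SIZE in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_null_create Null1 102400 512            # NULL_BDEV_SIZE, NULL_BLOCK_SIZE
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery subsystem
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # referral on port 4430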
00:07:30.916   06:17:47	-- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 4420
00:07:31.175  
00:07:31.175  Discovery Log Number of Records 6, Generation counter 6
00:07:31.175  =====Discovery Log Entry 0======
00:07:31.175  trtype:  tcp
00:07:31.175  adrfam:  ipv4
00:07:31.175  subtype: current discovery subsystem
00:07:31.175  treq:    not required
00:07:31.175  portid:  0
00:07:31.175  trsvcid: 4420
00:07:31.175  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:07:31.175  traddr:  10.0.0.2
00:07:31.175  eflags:  explicit discovery connections, duplicate discovery information
00:07:31.175  sectype: none
00:07:31.175  =====Discovery Log Entry 1======
00:07:31.175  trtype:  tcp
00:07:31.175  adrfam:  ipv4
00:07:31.175  subtype: nvme subsystem
00:07:31.175  treq:    not required
00:07:31.175  portid:  0
00:07:31.175  trsvcid: 4420
00:07:31.175  subnqn:  nqn.2016-06.io.spdk:cnode1
00:07:31.175  traddr:  10.0.0.2
00:07:31.175  eflags:  none
00:07:31.175  sectype: none
00:07:31.175  =====Discovery Log Entry 2======
00:07:31.175  trtype:  tcp
00:07:31.175  adrfam:  ipv4
00:07:31.175  subtype: nvme subsystem
00:07:31.175  treq:    not required
00:07:31.175  portid:  0
00:07:31.175  trsvcid: 4420
00:07:31.175  subnqn:  nqn.2016-06.io.spdk:cnode2
00:07:31.175  traddr:  10.0.0.2
00:07:31.175  eflags:  none
00:07:31.175  sectype: none
00:07:31.175  =====Discovery Log Entry 3======
00:07:31.175  trtype:  tcp
00:07:31.175  adrfam:  ipv4
00:07:31.175  subtype: nvme subsystem
00:07:31.175  treq:    not required
00:07:31.175  portid:  0
00:07:31.175  trsvcid: 4420
00:07:31.175  subnqn:  nqn.2016-06.io.spdk:cnode3
00:07:31.175  traddr:  10.0.0.2
00:07:31.175  eflags:  none
00:07:31.175  sectype: none
00:07:31.175  =====Discovery Log Entry 4======
00:07:31.175  trtype:  tcp
00:07:31.175  adrfam:  ipv4
00:07:31.175  subtype: nvme subsystem
00:07:31.175  treq:    not required
00:07:31.175  portid:  0
00:07:31.175  trsvcid: 4420
00:07:31.175  subnqn:  nqn.2016-06.io.spdk:cnode4
00:07:31.175  traddr:  10.0.0.2
00:07:31.175  eflags:  none
00:07:31.175  sectype: none
00:07:31.175  =====Discovery Log Entry 5======
00:07:31.175  trtype:  tcp
00:07:31.175  adrfam:  ipv4
00:07:31.175  subtype: discovery subsystem referral
00:07:31.175  treq:    not required
00:07:31.175  portid:  0
00:07:31.175  trsvcid: 4430
00:07:31.175  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:07:31.175  traddr:  10.0.0.2
00:07:31.175  eflags:  none
00:07:31.175  sectype: none
00:07:31.175  Perform nvmf subsystem discovery via RPC
00:07:31.175   06:17:47	-- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:07:31.175   06:17:47	-- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:07:31.175   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.175   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:31.175  [2024-12-16 06:17:47.923781] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:07:31.175  [
00:07:31.175  {
00:07:31.175  "allow_any_host": true,
00:07:31.175  "hosts": [],
00:07:31.175  "listen_addresses": [
00:07:31.175  {
00:07:31.175  "adrfam": "IPv4",
00:07:31.175  "traddr": "10.0.0.2",
00:07:31.175  "transport": "TCP",
00:07:31.175  "trsvcid": "4420",
00:07:31.175  "trtype": "TCP"
00:07:31.175  }
00:07:31.175  ],
00:07:31.175  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:07:31.175  "subtype": "Discovery"
00:07:31.175  },
00:07:31.175  {
00:07:31.175  "allow_any_host": true,
00:07:31.175  "hosts": [],
00:07:31.175  "listen_addresses": [
00:07:31.175  {
00:07:31.175  "adrfam": "IPv4",
00:07:31.175  "traddr": "10.0.0.2",
00:07:31.175  "transport": "TCP",
00:07:31.175  "trsvcid": "4420",
00:07:31.175  "trtype": "TCP"
00:07:31.175  }
00:07:31.175  ],
00:07:31.175  "max_cntlid": 65519,
00:07:31.175  "max_namespaces": 32,
00:07:31.175  "min_cntlid": 1,
00:07:31.175  "model_number": "SPDK bdev Controller",
00:07:31.175  "namespaces": [
00:07:31.175  {
00:07:31.175  "bdev_name": "Null1",
00:07:31.175  "name": "Null1",
00:07:31.175  "nguid": "2F183153350642BFAF528978A59B2F13",
00:07:31.175  "nsid": 1,
00:07:31.175  "uuid": "2f183153-3506-42bf-af52-8978a59b2f13"
00:07:31.175  }
00:07:31.175  ],
00:07:31.175  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:07:31.175  "serial_number": "SPDK00000000000001",
00:07:31.175  "subtype": "NVMe"
00:07:31.175  },
00:07:31.175  {
00:07:31.175  "allow_any_host": true,
00:07:31.175  "hosts": [],
00:07:31.175  "listen_addresses": [
00:07:31.175  {
00:07:31.175  "adrfam": "IPv4",
00:07:31.175  "traddr": "10.0.0.2",
00:07:31.175  "transport": "TCP",
00:07:31.175  "trsvcid": "4420",
00:07:31.175  "trtype": "TCP"
00:07:31.175  }
00:07:31.175  ],
00:07:31.175  "max_cntlid": 65519,
00:07:31.175  "max_namespaces": 32,
00:07:31.175  "min_cntlid": 1,
00:07:31.175  "model_number": "SPDK bdev Controller",
00:07:31.175  "namespaces": [
00:07:31.175  {
00:07:31.175  "bdev_name": "Null2",
00:07:31.175  "name": "Null2",
00:07:31.175  "nguid": "79C33DF5FC99451FBE1466601347DE8D",
00:07:31.175  "nsid": 1,
00:07:31.175  "uuid": "79c33df5-fc99-451f-be14-66601347de8d"
00:07:31.175  }
00:07:31.175  ],
00:07:31.175  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:07:31.175  "serial_number": "SPDK00000000000002",
00:07:31.175  "subtype": "NVMe"
00:07:31.175  },
00:07:31.175  {
00:07:31.175  "allow_any_host": true,
00:07:31.175  "hosts": [],
00:07:31.175  "listen_addresses": [
00:07:31.175  {
00:07:31.175  "adrfam": "IPv4",
00:07:31.175  "traddr": "10.0.0.2",
00:07:31.175  "transport": "TCP",
00:07:31.175  "trsvcid": "4420",
00:07:31.175  "trtype": "TCP"
00:07:31.175  }
00:07:31.175  ],
00:07:31.175  "max_cntlid": 65519,
00:07:31.175  "max_namespaces": 32,
00:07:31.175  "min_cntlid": 1,
00:07:31.175  "model_number": "SPDK bdev Controller",
00:07:31.175  "namespaces": [
00:07:31.175  {
00:07:31.175  "bdev_name": "Null3",
00:07:31.176  "name": "Null3",
00:07:31.176  "nguid": "8AFDB52C4142494E92557A76471FC93D",
00:07:31.176  "nsid": 1,
00:07:31.176  "uuid": "8afdb52c-4142-494e-9255-7a76471fc93d"
00:07:31.176  }
00:07:31.176  ],
00:07:31.176  "nqn": "nqn.2016-06.io.spdk:cnode3",
00:07:31.176  "serial_number": "SPDK00000000000003",
00:07:31.176  "subtype": "NVMe"
00:07:31.176  },
00:07:31.176  {
00:07:31.176  "allow_any_host": true,
00:07:31.176  "hosts": [],
00:07:31.176  "listen_addresses": [
00:07:31.176  {
00:07:31.176  "adrfam": "IPv4",
00:07:31.176  "traddr": "10.0.0.2",
00:07:31.176  "transport": "TCP",
00:07:31.176  "trsvcid": "4420",
00:07:31.176  "trtype": "TCP"
00:07:31.176  }
00:07:31.176  ],
00:07:31.176  "max_cntlid": 65519,
00:07:31.176  "max_namespaces": 32,
00:07:31.176  "min_cntlid": 1,
00:07:31.176  "model_number": "SPDK bdev Controller",
00:07:31.176  "namespaces": [
00:07:31.176  {
00:07:31.176  "bdev_name": "Null4",
00:07:31.176  "name": "Null4",
00:07:31.176  "nguid": "F211351F0EC54205976A55B986216645",
00:07:31.176  "nsid": 1,
00:07:31.176  "uuid": "f211351f-0ec5-4205-976a-55b986216645"
00:07:31.176  }
00:07:31.176  ],
00:07:31.176  "nqn": "nqn.2016-06.io.spdk:cnode4",
00:07:31.176  "serial_number": "SPDK00000000000004",
00:07:31.176  "subtype": "NVMe"
00:07:31.176  }
00:07:31.176  ]
00:07:31.176   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
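The JSON above is the target's own view of the discovery subsystem plus the four NVMe subsystems just created. The same data can be inspected programmatically; for example, a sketch using jq to list just the NQNs:

    rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
    # nqn.2014-08.org.nvmexpress.discovery
    # nqn.2016-06.io.spdk:cnode1 ... nqn.2016-06.io.spdk:cnode4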
00:07:31.176    06:17:47	-- target/discovery.sh@42 -- # seq 1 4
00:07:31.176   06:17:47	-- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:31.176   06:17:47	-- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:31.176   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:47	-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:07:31.176   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:47	-- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:31.176   06:17:47	-- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:07:31.176   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:47	-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:07:31.176   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:47	-- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:31.176   06:17:47	-- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:07:31.176   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:47	-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:07:31.176   06:17:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:47	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:48	-- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:31.176   06:17:48	-- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:07:31.176   06:17:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:48	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:48	-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:07:31.176   06:17:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:48	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:48	-- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:07:31.176   06:17:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176   06:17:48	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176   06:17:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176    06:17:48	-- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:07:31.176    06:17:48	-- target/discovery.sh@49 -- # jq -r '.[].name'
00:07:31.176    06:17:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.176    06:17:48	-- common/autotest_common.sh@10 -- # set +x
00:07:31.176    06:17:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.176   06:17:48	-- target/discovery.sh@49 -- # check_bdevs=
00:07:31.176   06:17:48	-- target/discovery.sh@50 -- # '[' -n '' ']'
00:07:31.176   06:17:48	-- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:07:31.176   06:17:48	-- target/discovery.sh@57 -- # nvmftestfini
00:07:31.176   06:17:48	-- nvmf/common.sh@476 -- # nvmfcleanup
00:07:31.176   06:17:48	-- nvmf/common.sh@116 -- # sync
00:07:31.176   06:17:48	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:07:31.176   06:17:48	-- nvmf/common.sh@119 -- # set +e
00:07:31.176   06:17:48	-- nvmf/common.sh@120 -- # for i in {1..20}
00:07:31.176   06:17:48	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:07:31.176  rmmod nvme_tcp
00:07:31.176  rmmod nvme_fabrics
00:07:31.176  rmmod nvme_keyring
00:07:31.176   06:17:48	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:07:31.176   06:17:48	-- nvmf/common.sh@123 -- # set -e
00:07:31.176   06:17:48	-- nvmf/common.sh@124 -- # return 0
00:07:31.176   06:17:48	-- nvmf/common.sh@477 -- # '[' -n 61378 ']'
00:07:31.176   06:17:48	-- nvmf/common.sh@478 -- # killprocess 61378
00:07:31.176   06:17:48	-- common/autotest_common.sh@936 -- # '[' -z 61378 ']'
00:07:31.176   06:17:48	-- common/autotest_common.sh@940 -- # kill -0 61378
00:07:31.435    06:17:48	-- common/autotest_common.sh@941 -- # uname
00:07:31.435   06:17:48	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:31.435    06:17:48	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61378
00:07:31.435   06:17:48	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:31.435   06:17:48	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:31.435  killing process with pid 61378
00:07:31.435   06:17:48	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 61378'
00:07:31.435   06:17:48	-- common/autotest_common.sh@955 -- # kill 61378
00:07:31.435  [2024-12-16 06:17:48.182720] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:07:31.435   06:17:48	-- common/autotest_common.sh@960 -- # wait 61378
00:07:31.435   06:17:48	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:07:31.435   06:17:48	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:07:31.435   06:17:48	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:07:31.435   06:17:48	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:31.435   06:17:48	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:07:31.435   06:17:48	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:31.435   06:17:48	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:31.435    06:17:48	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:31.693   06:17:48	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:07:31.693  
00:07:31.693  real	0m2.387s
00:07:31.693  user	0m6.368s
00:07:31.693  sys	0m0.645s
00:07:31.693   06:17:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:31.693   06:17:48	-- common/autotest_common.sh@10 -- # set +x
00:07:31.693  ************************************
00:07:31.693  END TEST nvmf_discovery
00:07:31.693  ************************************
00:07:31.693   06:17:48	-- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:07:31.693   06:17:48	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:31.693   06:17:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:31.693   06:17:48	-- common/autotest_common.sh@10 -- # set +x
00:07:31.693  ************************************
00:07:31.693  START TEST nvmf_referrals
00:07:31.693  ************************************
00:07:31.693   06:17:48	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:07:31.693  * Looking for test storage...
00:07:31.693  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:31.693    06:17:48	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:31.693     06:17:48	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:31.693     06:17:48	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:31.693    06:17:48	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:31.693    06:17:48	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:31.693    06:17:48	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:31.693    06:17:48	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:31.693    06:17:48	-- scripts/common.sh@335 -- # IFS=.-:
00:07:31.693    06:17:48	-- scripts/common.sh@335 -- # read -ra ver1
00:07:31.693    06:17:48	-- scripts/common.sh@336 -- # IFS=.-:
00:07:31.693    06:17:48	-- scripts/common.sh@336 -- # read -ra ver2
00:07:31.693    06:17:48	-- scripts/common.sh@337 -- # local 'op=<'
00:07:31.693    06:17:48	-- scripts/common.sh@339 -- # ver1_l=2
00:07:31.693    06:17:48	-- scripts/common.sh@340 -- # ver2_l=1
00:07:31.693    06:17:48	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:31.693    06:17:48	-- scripts/common.sh@343 -- # case "$op" in
00:07:31.693    06:17:48	-- scripts/common.sh@344 -- # : 1
00:07:31.694    06:17:48	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:31.694    06:17:48	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:31.694     06:17:48	-- scripts/common.sh@364 -- # decimal 1
00:07:31.694     06:17:48	-- scripts/common.sh@352 -- # local d=1
00:07:31.694     06:17:48	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:31.694     06:17:48	-- scripts/common.sh@354 -- # echo 1
00:07:31.694    06:17:48	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:31.694     06:17:48	-- scripts/common.sh@365 -- # decimal 2
00:07:31.694     06:17:48	-- scripts/common.sh@352 -- # local d=2
00:07:31.694     06:17:48	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:31.694     06:17:48	-- scripts/common.sh@354 -- # echo 2
00:07:31.952    06:17:48	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:31.952    06:17:48	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:31.952    06:17:48	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:31.952    06:17:48	-- scripts/common.sh@367 -- # return 0
00:07:31.952    06:17:48	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:31.952    06:17:48	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:31.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.952  		--rc genhtml_branch_coverage=1
00:07:31.952  		--rc genhtml_function_coverage=1
00:07:31.952  		--rc genhtml_legend=1
00:07:31.952  		--rc geninfo_all_blocks=1
00:07:31.952  		--rc geninfo_unexecuted_blocks=1
00:07:31.952  		
00:07:31.952  		'
00:07:31.952    06:17:48	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:31.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.952  		--rc genhtml_branch_coverage=1
00:07:31.952  		--rc genhtml_function_coverage=1
00:07:31.952  		--rc genhtml_legend=1
00:07:31.952  		--rc geninfo_all_blocks=1
00:07:31.952  		--rc geninfo_unexecuted_blocks=1
00:07:31.952  		
00:07:31.952  		'
00:07:31.953    06:17:48	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:31.953  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.953  		--rc genhtml_branch_coverage=1
00:07:31.953  		--rc genhtml_function_coverage=1
00:07:31.953  		--rc genhtml_legend=1
00:07:31.953  		--rc geninfo_all_blocks=1
00:07:31.953  		--rc geninfo_unexecuted_blocks=1
00:07:31.953  		
00:07:31.953  		'
00:07:31.953    06:17:48	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:31.953  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.953  		--rc genhtml_branch_coverage=1
00:07:31.953  		--rc genhtml_function_coverage=1
00:07:31.953  		--rc genhtml_legend=1
00:07:31.953  		--rc geninfo_all_blocks=1
00:07:31.953  		--rc geninfo_unexecuted_blocks=1
00:07:31.953  		
00:07:31.953  		'
00:07:31.953   06:17:48	-- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:31.953     06:17:48	-- nvmf/common.sh@7 -- # uname -s
00:07:31.953    06:17:48	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:31.953    06:17:48	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:31.953    06:17:48	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:31.953    06:17:48	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:31.953    06:17:48	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:31.953    06:17:48	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:31.953    06:17:48	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:31.953    06:17:48	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:31.953    06:17:48	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:31.953     06:17:48	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:31.953    06:17:48	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:07:31.953    06:17:48	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:07:31.953    06:17:48	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:31.953    06:17:48	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:31.953    06:17:48	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:07:31.953    06:17:48	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:31.953     06:17:48	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:31.953     06:17:48	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:31.953     06:17:48	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:31.953      06:17:48	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.953      06:17:48	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.953      06:17:48	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.953      06:17:48	-- paths/export.sh@5 -- # export PATH
00:07:31.953      06:17:48	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:31.953    06:17:48	-- nvmf/common.sh@46 -- # : 0
00:07:31.953    06:17:48	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:31.953    06:17:48	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:31.953    06:17:48	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:31.953    06:17:48	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:31.953    06:17:48	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:31.953    06:17:48	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:31.953    06:17:48	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:31.953    06:17:48	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:31.953   06:17:48	-- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:07:31.953   06:17:48	-- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:07:31.953   06:17:48	-- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:07:31.953   06:17:48	-- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:07:31.953   06:17:48	-- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:07:31.953   06:17:48	-- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:07:31.953   06:17:48	-- target/referrals.sh@37 -- # nvmftestinit
00:07:31.953   06:17:48	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:07:31.953   06:17:48	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:31.953   06:17:48	-- nvmf/common.sh@436 -- # prepare_net_devs
00:07:31.953   06:17:48	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:07:31.953   06:17:48	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:07:31.953   06:17:48	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:31.953   06:17:48	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:31.953    06:17:48	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:31.953   06:17:48	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:07:31.953   06:17:48	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:07:31.953   06:17:48	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:07:31.953   06:17:48	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:07:31.953   06:17:48	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:07:31.953   06:17:48	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:07:31.953   06:17:48	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:31.953   06:17:48	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:31.953   06:17:48	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:07:31.953   06:17:48	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:07:31.953   06:17:48	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:07:31.953   06:17:48	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:07:31.953   06:17:48	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:07:31.953   06:17:48	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:31.953   06:17:48	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:07:31.953   06:17:48	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:07:31.953   06:17:48	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:07:31.953   06:17:48	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:07:31.953   06:17:48	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:07:31.953   06:17:48	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:07:31.953  Cannot find device "nvmf_tgt_br"
00:07:31.953   06:17:48	-- nvmf/common.sh@154 -- # true
00:07:31.953   06:17:48	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:07:31.953  Cannot find device "nvmf_tgt_br2"
00:07:31.953   06:17:48	-- nvmf/common.sh@155 -- # true
00:07:31.953   06:17:48	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:07:31.953   06:17:48	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:07:31.953  Cannot find device "nvmf_tgt_br"
00:07:31.953   06:17:48	-- nvmf/common.sh@157 -- # true
00:07:31.953   06:17:48	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:07:31.953  Cannot find device "nvmf_tgt_br2"
00:07:31.953   06:17:48	-- nvmf/common.sh@158 -- # true
00:07:31.953   06:17:48	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:07:31.953   06:17:48	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:07:31.953   06:17:48	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:31.953  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:31.953   06:17:48	-- nvmf/common.sh@161 -- # true
00:07:31.953   06:17:48	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:07:31.953  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:31.953   06:17:48	-- nvmf/common.sh@162 -- # true
00:07:31.953   06:17:48	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:07:31.953   06:17:48	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:07:31.953   06:17:48	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:07:31.953   06:17:48	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:07:31.953   06:17:48	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:07:31.953   06:17:48	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:07:31.953   06:17:48	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:07:31.953   06:17:48	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:07:31.953   06:17:48	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:07:31.953   06:17:48	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:07:32.212   06:17:48	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:07:32.212   06:17:48	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:07:32.212   06:17:48	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:07:32.212   06:17:48	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:07:32.212   06:17:48	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:07:32.212   06:17:48	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:07:32.212   06:17:48	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:07:32.212   06:17:48	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:07:32.212   06:17:48	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:07:32.212   06:17:48	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:07:32.212   06:17:48	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:07:32.212   06:17:49	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:07:32.212   06:17:49	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:07:32.212   06:17:49	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:07:32.212  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:32.212  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms
00:07:32.212  
00:07:32.212  --- 10.0.0.2 ping statistics ---
00:07:32.212  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:32.212  rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
00:07:32.212   06:17:49	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:07:32.212  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:07:32.212  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms
00:07:32.212  
00:07:32.212  --- 10.0.0.3 ping statistics ---
00:07:32.212  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:32.212  rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:07:32.212   06:17:49	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:07:32.212  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:32.212  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:07:32.212  
00:07:32.212  --- 10.0.0.1 ping statistics ---
00:07:32.212  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:32.212  rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:07:32.212   06:17:49	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:32.212   06:17:49	-- nvmf/common.sh@421 -- # return 0
00:07:32.212   06:17:49	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:07:32.212   06:17:49	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:32.212   06:17:49	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:07:32.212   06:17:49	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:07:32.212   06:17:49	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:32.213   06:17:49	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:07:32.213   06:17:49	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
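The nvmftestinit trace above builds a veth/netns topology for the TCP tests before the target is started. A minimal sketch of that topology, reconstructed only from the commands visible in the trace (interface, namespace and address names are taken from the log; the second target interface nvmf_tgt_if2/nvmf_tgt_br2 is trimmed for brevity, and error handling is omitted):

  # Isolated namespace for the SPDK target, plus one veth pair per side.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1 on the host, target 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together and open TCP/4420 toward the initiator.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up; ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity check in both directions, as in the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1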
00:07:32.213   06:17:49	-- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:07:32.213   06:17:49	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:07:32.213   06:17:49	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:32.213   06:17:49	-- common/autotest_common.sh@10 -- # set +x
00:07:32.213   06:17:49	-- nvmf/common.sh@469 -- # nvmfpid=61609
00:07:32.213   06:17:49	-- nvmf/common.sh@470 -- # waitforlisten 61609
00:07:32.213   06:17:49	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:32.213   06:17:49	-- common/autotest_common.sh@829 -- # '[' -z 61609 ']'
00:07:32.213   06:17:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:32.213   06:17:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:32.213  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:32.213   06:17:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:32.213   06:17:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:32.213   06:17:49	-- common/autotest_common.sh@10 -- # set +x
00:07:32.213  [2024-12-16 06:17:49.119105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:32.213  [2024-12-16 06:17:49.119205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:32.472  [2024-12-16 06:17:49.261130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:32.472  [2024-12-16 06:17:49.336745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:32.472  [2024-12-16 06:17:49.336888] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:32.472  [2024-12-16 06:17:49.336900] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:32.472  [2024-12-16 06:17:49.336910] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:32.472  [2024-12-16 06:17:49.337064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:32.472  [2024-12-16 06:17:49.337482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:32.472  [2024-12-16 06:17:49.337776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:32.472  [2024-12-16 06:17:49.337781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.407   06:17:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:33.407   06:17:50	-- common/autotest_common.sh@862 -- # return 0
00:07:33.407   06:17:50	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:07:33.407   06:17:50	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:33.407   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.407   06:17:50	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:33.407   06:17:50	-- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:33.407   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.407   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.407  [2024-12-16 06:17:50.125103] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:33.407   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.407   06:17:50	-- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:07:33.407   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.407   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.407  [2024-12-16 06:17:50.153519] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:07:33.407   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.407   06:17:50	-- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:07:33.407   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.407   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.407   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.407   06:17:50	-- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:07:33.407   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.407   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.407   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.407   06:17:50	-- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:07:33.407   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.407   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.407   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.407    06:17:50	-- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:33.407    06:17:50	-- target/referrals.sh@48 -- # jq length
00:07:33.407    06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.407    06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.407    06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.407   06:17:50	-- target/referrals.sh@48 -- # (( 3 == 3 ))
00:07:33.407    06:17:50	-- target/referrals.sh@49 -- # get_referral_ips rpc
00:07:33.407    06:17:50	-- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:07:33.407     06:17:50	-- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:33.407     06:17:50	-- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:07:33.407     06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.407     06:17:50	-- target/referrals.sh@21 -- # sort
00:07:33.407     06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.407     06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.407    06:17:50	-- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:07:33.407   06:17:50	-- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:07:33.407    06:17:50	-- target/referrals.sh@50 -- # get_referral_ips nvme
00:07:33.407    06:17:50	-- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:33.407    06:17:50	-- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:33.407     06:17:50	-- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:33.407     06:17:50	-- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:33.407     06:17:50	-- target/referrals.sh@26 -- # sort
00:07:33.666    06:17:50	-- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:07:33.666   06:17:50	-- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
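The referral round-trip exercised above condenses to the hedged sketch below. Here rpc.py stands in for the test's rpc_cmd wrapper (assumed to resolve to SPDK's scripts/rpc.py against the target started in the namespace), and the --hostnqn/--hostid flags from the trace are omitted; the RPC names, addresses, ports and jq filters are taken verbatim from the log:

  # Target side: TCP transport, a discovery listener on 8009, three referrals on 4430.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # Target-side view of the referral table.
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

  # Host-side view: the discovery log fetched over TCP should carry the same addresses.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'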
00:07:33.666   06:17:50	-- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:07:33.666   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.666   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.666   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.666   06:17:50	-- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:07:33.666   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.666   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.666   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.666   06:17:50	-- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:07:33.666   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.666   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.666   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.666    06:17:50	-- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:33.666    06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.666    06:17:50	-- target/referrals.sh@56 -- # jq length
00:07:33.666    06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.666    06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.666   06:17:50	-- target/referrals.sh@56 -- # (( 0 == 0 ))
00:07:33.666    06:17:50	-- target/referrals.sh@57 -- # get_referral_ips nvme
00:07:33.666    06:17:50	-- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:33.666    06:17:50	-- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:33.666     06:17:50	-- target/referrals.sh@26 -- # sort
00:07:33.666     06:17:50	-- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:33.666     06:17:50	-- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:33.925    06:17:50	-- target/referrals.sh@26 -- # echo
00:07:33.925   06:17:50	-- target/referrals.sh@57 -- # [[ '' == '' ]]
00:07:33.925   06:17:50	-- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:07:33.925   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.925   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.925   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.925   06:17:50	-- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:07:33.925   06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.925   06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.925   06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.925    06:17:50	-- target/referrals.sh@65 -- # get_referral_ips rpc
00:07:33.925    06:17:50	-- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:07:33.925     06:17:50	-- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:33.925     06:17:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.925     06:17:50	-- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:07:33.925     06:17:50	-- target/referrals.sh@21 -- # sort
00:07:33.925     06:17:50	-- common/autotest_common.sh@10 -- # set +x
00:07:33.925     06:17:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.925    06:17:50	-- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:07:33.925   06:17:50	-- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:07:33.925    06:17:50	-- target/referrals.sh@66 -- # get_referral_ips nvme
00:07:33.925    06:17:50	-- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:33.925    06:17:50	-- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:33.925     06:17:50	-- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:33.925     06:17:50	-- target/referrals.sh@26 -- # sort
00:07:33.925     06:17:50	-- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:33.925    06:17:50	-- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:07:33.925   06:17:50	-- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:07:33.925    06:17:50	-- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:07:33.925    06:17:50	-- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:07:33.925    06:17:50	-- target/referrals.sh@67 -- # jq -r .subnqn
00:07:33.925    06:17:50	-- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:33.925    06:17:50	-- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:07:34.184   06:17:50	-- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:07:34.184    06:17:50	-- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:07:34.184    06:17:50	-- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:07:34.184    06:17:50	-- target/referrals.sh@68 -- # jq -r .subnqn
00:07:34.184    06:17:50	-- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:34.184    06:17:50	-- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:07:34.184   06:17:51	-- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
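The two referrals added with -n above differ only in the subsystem NQN they carry: the one pointing at nqn.2016-06.io.spdk:cnode1 shows up on the host as an "nvme subsystem" record, while the one pointing at the well-known discovery NQN shows up as a "discovery subsystem referral". A hedged one-liner to inspect that mapping, reusing the field names from the trace's own jq filters:

  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | "\(.subtype)\t\(.subnqn)\t\(.traddr)"'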
00:07:34.184   06:17:51	-- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:07:34.184   06:17:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.184   06:17:51	-- common/autotest_common.sh@10 -- # set +x
00:07:34.184   06:17:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.184    06:17:51	-- target/referrals.sh@73 -- # get_referral_ips rpc
00:07:34.184    06:17:51	-- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:07:34.184     06:17:51	-- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:34.184     06:17:51	-- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:07:34.184     06:17:51	-- target/referrals.sh@21 -- # sort
00:07:34.184     06:17:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.184     06:17:51	-- common/autotest_common.sh@10 -- # set +x
00:07:34.184     06:17:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.184    06:17:51	-- target/referrals.sh@21 -- # echo 127.0.0.2
00:07:34.184   06:17:51	-- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:07:34.184    06:17:51	-- target/referrals.sh@74 -- # get_referral_ips nvme
00:07:34.184    06:17:51	-- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:34.184    06:17:51	-- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:34.184     06:17:51	-- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:34.184     06:17:51	-- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:34.184     06:17:51	-- target/referrals.sh@26 -- # sort
00:07:34.443    06:17:51	-- target/referrals.sh@26 -- # echo 127.0.0.2
00:07:34.443   06:17:51	-- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:07:34.443    06:17:51	-- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:07:34.443    06:17:51	-- target/referrals.sh@75 -- # jq -r .subnqn
00:07:34.443    06:17:51	-- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:07:34.443    06:17:51	-- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:07:34.443    06:17:51	-- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:34.443   06:17:51	-- target/referrals.sh@75 -- # [[ '' == '' ]]
00:07:34.443    06:17:51	-- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:07:34.443    06:17:51	-- target/referrals.sh@76 -- # jq -r .subnqn
00:07:34.443    06:17:51	-- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:07:34.443    06:17:51	-- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:34.443    06:17:51	-- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:07:34.701   06:17:51	-- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:07:34.701   06:17:51	-- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:07:34.701   06:17:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.701   06:17:51	-- common/autotest_common.sh@10 -- # set +x
00:07:34.701   06:17:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.701    06:17:51	-- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:34.701    06:17:51	-- target/referrals.sh@82 -- # jq length
00:07:34.701    06:17:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.701    06:17:51	-- common/autotest_common.sh@10 -- # set +x
00:07:34.701    06:17:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.702   06:17:51	-- target/referrals.sh@82 -- # (( 0 == 0 ))
00:07:34.702    06:17:51	-- target/referrals.sh@83 -- # get_referral_ips nvme
00:07:34.702    06:17:51	-- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:34.702    06:17:51	-- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:34.702     06:17:51	-- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:34.702     06:17:51	-- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:34.702     06:17:51	-- target/referrals.sh@26 -- # sort
00:07:34.960    06:17:51	-- target/referrals.sh@26 -- # echo
00:07:34.960   06:17:51	-- target/referrals.sh@83 -- # [[ '' == '' ]]
00:07:34.960   06:17:51	-- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:07:34.960   06:17:51	-- target/referrals.sh@86 -- # nvmftestfini
00:07:34.960   06:17:51	-- nvmf/common.sh@476 -- # nvmfcleanup
00:07:34.960   06:17:51	-- nvmf/common.sh@116 -- # sync
00:07:34.960   06:17:51	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:07:34.960   06:17:51	-- nvmf/common.sh@119 -- # set +e
00:07:34.960   06:17:51	-- nvmf/common.sh@120 -- # for i in {1..20}
00:07:34.960   06:17:51	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:07:34.960  rmmod nvme_tcp
00:07:34.960  rmmod nvme_fabrics
00:07:34.960  rmmod nvme_keyring
00:07:34.960   06:17:51	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:07:34.960   06:17:51	-- nvmf/common.sh@123 -- # set -e
00:07:34.960   06:17:51	-- nvmf/common.sh@124 -- # return 0
00:07:34.960   06:17:51	-- nvmf/common.sh@477 -- # '[' -n 61609 ']'
00:07:34.960   06:17:51	-- nvmf/common.sh@478 -- # killprocess 61609
00:07:34.960   06:17:51	-- common/autotest_common.sh@936 -- # '[' -z 61609 ']'
00:07:34.960   06:17:51	-- common/autotest_common.sh@940 -- # kill -0 61609
00:07:34.960    06:17:51	-- common/autotest_common.sh@941 -- # uname
00:07:34.960   06:17:51	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:34.960    06:17:51	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61609
00:07:34.960   06:17:51	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:34.960  killing process with pid 61609
00:07:34.960   06:17:51	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:34.960   06:17:51	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 61609'
00:07:34.960   06:17:51	-- common/autotest_common.sh@955 -- # kill 61609
00:07:34.960   06:17:51	-- common/autotest_common.sh@960 -- # wait 61609
00:07:35.219   06:17:52	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:07:35.219   06:17:52	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:07:35.219   06:17:52	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:07:35.219   06:17:52	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:35.219   06:17:52	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:07:35.219   06:17:52	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:35.219   06:17:52	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:35.219    06:17:52	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:35.219   06:17:52	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:07:35.219  
00:07:35.219  real	0m3.586s
00:07:35.219  user	0m11.816s
00:07:35.219  sys	0m0.868s
00:07:35.219   06:17:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:35.219   06:17:52	-- common/autotest_common.sh@10 -- # set +x
00:07:35.219  ************************************
00:07:35.219  END TEST nvmf_referrals
00:07:35.219  ************************************
00:07:35.219   06:17:52	-- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:07:35.219   06:17:52	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:35.219   06:17:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:35.219   06:17:52	-- common/autotest_common.sh@10 -- # set +x
00:07:35.219  ************************************
00:07:35.219  START TEST nvmf_connect_disconnect
00:07:35.219  ************************************
00:07:35.219   06:17:52	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:07:35.478  * Looking for test storage...
00:07:35.479  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:35.479    06:17:52	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:35.479     06:17:52	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:35.479     06:17:52	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:35.479    06:17:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:35.479    06:17:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:35.479    06:17:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:35.479    06:17:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:35.479    06:17:52	-- scripts/common.sh@335 -- # IFS=.-:
00:07:35.479    06:17:52	-- scripts/common.sh@335 -- # read -ra ver1
00:07:35.479    06:17:52	-- scripts/common.sh@336 -- # IFS=.-:
00:07:35.479    06:17:52	-- scripts/common.sh@336 -- # read -ra ver2
00:07:35.479    06:17:52	-- scripts/common.sh@337 -- # local 'op=<'
00:07:35.479    06:17:52	-- scripts/common.sh@339 -- # ver1_l=2
00:07:35.479    06:17:52	-- scripts/common.sh@340 -- # ver2_l=1
00:07:35.479    06:17:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:35.479    06:17:52	-- scripts/common.sh@343 -- # case "$op" in
00:07:35.479    06:17:52	-- scripts/common.sh@344 -- # : 1
00:07:35.479    06:17:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:35.479    06:17:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:35.479     06:17:52	-- scripts/common.sh@364 -- # decimal 1
00:07:35.479     06:17:52	-- scripts/common.sh@352 -- # local d=1
00:07:35.479     06:17:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:35.479     06:17:52	-- scripts/common.sh@354 -- # echo 1
00:07:35.479    06:17:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:35.479     06:17:52	-- scripts/common.sh@365 -- # decimal 2
00:07:35.479     06:17:52	-- scripts/common.sh@352 -- # local d=2
00:07:35.479     06:17:52	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:35.479     06:17:52	-- scripts/common.sh@354 -- # echo 2
00:07:35.479    06:17:52	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:35.479    06:17:52	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:35.479    06:17:52	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:35.479    06:17:52	-- scripts/common.sh@367 -- # return 0
00:07:35.479    06:17:52	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:35.479    06:17:52	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:35.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:35.479  		--rc genhtml_branch_coverage=1
00:07:35.479  		--rc genhtml_function_coverage=1
00:07:35.479  		--rc genhtml_legend=1
00:07:35.479  		--rc geninfo_all_blocks=1
00:07:35.479  		--rc geninfo_unexecuted_blocks=1
00:07:35.479  		
00:07:35.479  		'
00:07:35.479    06:17:52	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:35.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:35.479  		--rc genhtml_branch_coverage=1
00:07:35.479  		--rc genhtml_function_coverage=1
00:07:35.479  		--rc genhtml_legend=1
00:07:35.479  		--rc geninfo_all_blocks=1
00:07:35.479  		--rc geninfo_unexecuted_blocks=1
00:07:35.479  		
00:07:35.479  		'
00:07:35.479    06:17:52	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:35.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:35.479  		--rc genhtml_branch_coverage=1
00:07:35.479  		--rc genhtml_function_coverage=1
00:07:35.479  		--rc genhtml_legend=1
00:07:35.479  		--rc geninfo_all_blocks=1
00:07:35.479  		--rc geninfo_unexecuted_blocks=1
00:07:35.479  		
00:07:35.479  		'
00:07:35.479    06:17:52	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:35.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:35.479  		--rc genhtml_branch_coverage=1
00:07:35.479  		--rc genhtml_function_coverage=1
00:07:35.479  		--rc genhtml_legend=1
00:07:35.479  		--rc geninfo_all_blocks=1
00:07:35.479  		--rc geninfo_unexecuted_blocks=1
00:07:35.479  		
00:07:35.479  		'
00:07:35.479   06:17:52	-- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:35.479     06:17:52	-- nvmf/common.sh@7 -- # uname -s
00:07:35.479    06:17:52	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:35.479    06:17:52	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:35.479    06:17:52	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:35.479    06:17:52	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:35.479    06:17:52	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:35.479    06:17:52	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:35.479    06:17:52	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:35.479    06:17:52	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:35.479    06:17:52	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:35.479     06:17:52	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:35.479    06:17:52	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:07:35.479    06:17:52	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:07:35.479    06:17:52	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:35.479    06:17:52	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:35.479    06:17:52	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:07:35.479    06:17:52	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:35.479     06:17:52	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:35.479     06:17:52	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:35.479     06:17:52	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:35.479      06:17:52	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.479      06:17:52	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.479      06:17:52	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.479      06:17:52	-- paths/export.sh@5 -- # export PATH
00:07:35.479      06:17:52	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.479    06:17:52	-- nvmf/common.sh@46 -- # : 0
00:07:35.479    06:17:52	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:35.479    06:17:52	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:35.479    06:17:52	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:35.479    06:17:52	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:35.479    06:17:52	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:35.479    06:17:52	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:35.479    06:17:52	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:35.479    06:17:52	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:35.479   06:17:52	-- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:35.479   06:17:52	-- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:35.479   06:17:52	-- target/connect_disconnect.sh@15 -- # nvmftestinit
00:07:35.479   06:17:52	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:07:35.479   06:17:52	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:35.479   06:17:52	-- nvmf/common.sh@436 -- # prepare_net_devs
00:07:35.479   06:17:52	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:07:35.479   06:17:52	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:07:35.479   06:17:52	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:35.479   06:17:52	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:35.479    06:17:52	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:35.479   06:17:52	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:07:35.479   06:17:52	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:07:35.479   06:17:52	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:07:35.479   06:17:52	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:07:35.479   06:17:52	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:07:35.479   06:17:52	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:07:35.479   06:17:52	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:35.479   06:17:52	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:35.479   06:17:52	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:07:35.479   06:17:52	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:07:35.479   06:17:52	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:07:35.479   06:17:52	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:07:35.479   06:17:52	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:07:35.479   06:17:52	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:35.479   06:17:52	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:07:35.479   06:17:52	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:07:35.479   06:17:52	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:07:35.479   06:17:52	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:07:35.479   06:17:52	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:07:35.479   06:17:52	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:07:35.479  Cannot find device "nvmf_tgt_br"
00:07:35.479   06:17:52	-- nvmf/common.sh@154 -- # true
00:07:35.479   06:17:52	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:07:35.479  Cannot find device "nvmf_tgt_br2"
00:07:35.479   06:17:52	-- nvmf/common.sh@155 -- # true
00:07:35.479   06:17:52	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:07:35.479   06:17:52	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:07:35.479  Cannot find device "nvmf_tgt_br"
00:07:35.479   06:17:52	-- nvmf/common.sh@157 -- # true
00:07:35.479   06:17:52	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:07:35.479  Cannot find device "nvmf_tgt_br2"
00:07:35.479   06:17:52	-- nvmf/common.sh@158 -- # true
00:07:35.479   06:17:52	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:07:35.479   06:17:52	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:07:35.479   06:17:52	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:35.480  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:35.480   06:17:52	-- nvmf/common.sh@161 -- # true
00:07:35.480   06:17:52	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:07:35.480  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:35.480   06:17:52	-- nvmf/common.sh@162 -- # true
00:07:35.480   06:17:52	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:07:35.480   06:17:52	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:07:35.480   06:17:52	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:07:35.738   06:17:52	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:07:35.738   06:17:52	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:07:35.738   06:17:52	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:07:35.738   06:17:52	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:07:35.738   06:17:52	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:07:35.738   06:17:52	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:07:35.738   06:17:52	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:07:35.738   06:17:52	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:07:35.738   06:17:52	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:07:35.738   06:17:52	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:07:35.738   06:17:52	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:07:35.738   06:17:52	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:07:35.738   06:17:52	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:07:35.738   06:17:52	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:07:35.738   06:17:52	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:07:35.738   06:17:52	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:07:35.738   06:17:52	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:07:35.738   06:17:52	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:07:35.738   06:17:52	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:07:35.738   06:17:52	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:07:35.738   06:17:52	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:07:35.738  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:35.738  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms
00:07:35.738  
00:07:35.738  --- 10.0.0.2 ping statistics ---
00:07:35.738  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:35.738  rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:07:35.738   06:17:52	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:07:35.738  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:07:35.738  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms
00:07:35.738  
00:07:35.738  --- 10.0.0.3 ping statistics ---
00:07:35.738  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:35.738  rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:07:35.738   06:17:52	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:07:35.738  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:35.738  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:07:35.738  
00:07:35.738  --- 10.0.0.1 ping statistics ---
00:07:35.738  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:35.738  rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:07:35.738   06:17:52	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:35.738   06:17:52	-- nvmf/common.sh@421 -- # return 0
00:07:35.739   06:17:52	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:07:35.739   06:17:52	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:35.739   06:17:52	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:07:35.739   06:17:52	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:07:35.739   06:17:52	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:35.739   06:17:52	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:07:35.739   06:17:52	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:07:35.739   06:17:52	-- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:07:35.739   06:17:52	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:07:35.739   06:17:52	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:35.739   06:17:52	-- common/autotest_common.sh@10 -- # set +x
00:07:35.739   06:17:52	-- nvmf/common.sh@469 -- # nvmfpid=61924
00:07:35.739   06:17:52	-- nvmf/common.sh@470 -- # waitforlisten 61924
00:07:35.739   06:17:52	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:35.739   06:17:52	-- common/autotest_common.sh@829 -- # '[' -z 61924 ']'
00:07:35.739   06:17:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:35.739   06:17:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:35.739  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:35.739   06:17:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:35.739   06:17:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:35.739   06:17:52	-- common/autotest_common.sh@10 -- # set +x
00:07:35.997  [2024-12-16 06:17:52.718642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:35.997  [2024-12-16 06:17:52.718732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:35.997  [2024-12-16 06:17:52.860709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:35.997  [2024-12-16 06:17:52.961331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:35.997  [2024-12-16 06:17:52.961524] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:35.997  [2024-12-16 06:17:52.961541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:35.997  [2024-12-16 06:17:52.961553] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:35.997  [2024-12-16 06:17:52.961725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:35.997  [2024-12-16 06:17:52.961874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:35.997  [2024-12-16 06:17:52.962340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:35.997  [2024-12-16 06:17:52.962375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.932   06:17:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:36.932   06:17:53	-- common/autotest_common.sh@862 -- # return 0
00:07:36.932   06:17:53	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:07:36.933   06:17:53	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:36.933   06:17:53	-- common/autotest_common.sh@10 -- # set +x
00:07:36.933   06:17:53	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:07:36.933   06:17:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.933   06:17:53	-- common/autotest_common.sh@10 -- # set +x
00:07:36.933  [2024-12-16 06:17:53.752886] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:36.933   06:17:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.933    06:17:53	-- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:07:36.933    06:17:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.933    06:17:53	-- common/autotest_common.sh@10 -- # set +x
00:07:36.933    06:17:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:07:36.933   06:17:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.933   06:17:53	-- common/autotest_common.sh@10 -- # set +x
00:07:36.933   06:17:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:07:36.933   06:17:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.933   06:17:53	-- common/autotest_common.sh@10 -- # set +x
00:07:36.933   06:17:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:36.933   06:17:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.933   06:17:53	-- common/autotest_common.sh@10 -- # set +x
00:07:36.933  [2024-12-16 06:17:53.823775] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:36.933   06:17:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']'
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@27 -- # num_iterations=100
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8'
00:07:36.933   06:17:53	-- target/connect_disconnect.sh@34 -- # set +x
00:07:39.469  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:42.029  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:43.935  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:46.470  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:48.371  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:50.902  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:52.805  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:55.363  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:57.267  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:59.800  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:01.705  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:04.239  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:06.141  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:08.704  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:10.603  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:13.130  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:15.028  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:17.621  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:19.518  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:22.049  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:23.950  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:26.480  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:28.379  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:30.949  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:32.848  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:35.378  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:37.280  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:39.809  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:41.760  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:44.291  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:46.192  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:48.722  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:50.624  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:53.172  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:55.703  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:57.602  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:00.131  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:02.033  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:04.593  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:06.493  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:09.022  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:10.923  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:13.454  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:15.358  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:17.889  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:19.788  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:22.318  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:24.218  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:26.756  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:29.285  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:31.185  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:33.715  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:35.617  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:38.172  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:40.072  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:42.602  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:44.501  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:47.028  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:49.012  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:51.542  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:53.442  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:55.969  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:57.877  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:00.435  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:02.336  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:04.870  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:06.774  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:09.306  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:11.233  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:13.771  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:15.676  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:18.216  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:20.123  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:22.658  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:25.190  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:27.093  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:29.627  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:31.533  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:34.073  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:35.979  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:38.515  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:40.420  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:42.956  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:44.921  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:47.456  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:49.358  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:51.893  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:53.796  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:56.353  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:58.255  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:00.787  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:02.691  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:05.222  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:07.753  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:09.654  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:12.194  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:14.097  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:16.630  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:18.532  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:21.075  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:21.075   06:21:37	-- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:11:21.075   06:21:37	-- target/connect_disconnect.sh@45 -- # nvmftestfini
00:11:21.075   06:21:37	-- nvmf/common.sh@476 -- # nvmfcleanup
00:11:21.075   06:21:37	-- nvmf/common.sh@116 -- # sync
00:11:21.075   06:21:37	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:11:21.075   06:21:37	-- nvmf/common.sh@119 -- # set +e
00:11:21.075   06:21:37	-- nvmf/common.sh@120 -- # for i in {1..20}
00:11:21.075   06:21:37	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:11:21.075  rmmod nvme_tcp
00:11:21.075  rmmod nvme_fabrics
00:11:21.075  rmmod nvme_keyring
00:11:21.075   06:21:37	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:11:21.075   06:21:37	-- nvmf/common.sh@123 -- # set -e
00:11:21.075   06:21:37	-- nvmf/common.sh@124 -- # return 0
00:11:21.075   06:21:37	-- nvmf/common.sh@477 -- # '[' -n 61924 ']'
00:11:21.075   06:21:37	-- nvmf/common.sh@478 -- # killprocess 61924
00:11:21.075   06:21:37	-- common/autotest_common.sh@936 -- # '[' -z 61924 ']'
00:11:21.075   06:21:37	-- common/autotest_common.sh@940 -- # kill -0 61924
00:11:21.075    06:21:37	-- common/autotest_common.sh@941 -- # uname
00:11:21.075   06:21:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:21.075    06:21:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61924
00:11:21.075   06:21:37	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:21.075  killing process with pid 61924
00:11:21.075   06:21:37	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:21.075   06:21:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 61924'
00:11:21.075   06:21:37	-- common/autotest_common.sh@955 -- # kill 61924
00:11:21.075   06:21:37	-- common/autotest_common.sh@960 -- # wait 61924
00:11:21.075   06:21:37	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:11:21.075   06:21:37	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:11:21.075   06:21:37	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:11:21.075   06:21:37	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:21.075   06:21:37	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:11:21.075   06:21:37	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:21.075   06:21:37	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:21.075    06:21:37	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:21.075   06:21:37	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:11:21.075  
00:11:21.075  real	3m45.823s
00:11:21.075  user	14m37.083s
00:11:21.075  sys	0m24.958s
00:11:21.075   06:21:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:21.075  ************************************
00:11:21.075  END TEST nvmf_connect_disconnect
00:11:21.075   06:21:37	-- common/autotest_common.sh@10 -- # set +x
00:11:21.075  ************************************
00:11:21.075   06:21:37	-- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:11:21.075   06:21:37	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:21.075   06:21:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:21.075   06:21:37	-- common/autotest_common.sh@10 -- # set +x
00:11:21.075  ************************************
00:11:21.075  START TEST nvmf_multitarget
00:11:21.075  ************************************
00:11:21.076   06:21:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:11:21.348  * Looking for test storage...
00:11:21.348  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:21.348    06:21:38	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:21.348     06:21:38	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:21.348     06:21:38	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:21.348    06:21:38	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:21.348    06:21:38	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:21.348    06:21:38	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:21.348    06:21:38	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:21.348    06:21:38	-- scripts/common.sh@335 -- # IFS=.-:
00:11:21.348    06:21:38	-- scripts/common.sh@335 -- # read -ra ver1
00:11:21.348    06:21:38	-- scripts/common.sh@336 -- # IFS=.-:
00:11:21.348    06:21:38	-- scripts/common.sh@336 -- # read -ra ver2
00:11:21.348    06:21:38	-- scripts/common.sh@337 -- # local 'op=<'
00:11:21.348    06:21:38	-- scripts/common.sh@339 -- # ver1_l=2
00:11:21.348    06:21:38	-- scripts/common.sh@340 -- # ver2_l=1
00:11:21.348    06:21:38	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:21.348    06:21:38	-- scripts/common.sh@343 -- # case "$op" in
00:11:21.348    06:21:38	-- scripts/common.sh@344 -- # : 1
00:11:21.348    06:21:38	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:21.348    06:21:38	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:21.348     06:21:38	-- scripts/common.sh@364 -- # decimal 1
00:11:21.348     06:21:38	-- scripts/common.sh@352 -- # local d=1
00:11:21.348     06:21:38	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:21.348     06:21:38	-- scripts/common.sh@354 -- # echo 1
00:11:21.348    06:21:38	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:21.348     06:21:38	-- scripts/common.sh@365 -- # decimal 2
00:11:21.348     06:21:38	-- scripts/common.sh@352 -- # local d=2
00:11:21.348     06:21:38	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:21.348     06:21:38	-- scripts/common.sh@354 -- # echo 2
00:11:21.348    06:21:38	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:21.348    06:21:38	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:21.348    06:21:38	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:21.348    06:21:38	-- scripts/common.sh@367 -- # return 0
00:11:21.348    06:21:38	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:21.348    06:21:38	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:21.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:21.348  		--rc genhtml_branch_coverage=1
00:11:21.348  		--rc genhtml_function_coverage=1
00:11:21.348  		--rc genhtml_legend=1
00:11:21.348  		--rc geninfo_all_blocks=1
00:11:21.348  		--rc geninfo_unexecuted_blocks=1
00:11:21.348  		
00:11:21.348  		'
00:11:21.348    06:21:38	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:21.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:21.348  		--rc genhtml_branch_coverage=1
00:11:21.348  		--rc genhtml_function_coverage=1
00:11:21.348  		--rc genhtml_legend=1
00:11:21.348  		--rc geninfo_all_blocks=1
00:11:21.348  		--rc geninfo_unexecuted_blocks=1
00:11:21.348  		
00:11:21.348  		'
00:11:21.348    06:21:38	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:21.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:21.348  		--rc genhtml_branch_coverage=1
00:11:21.348  		--rc genhtml_function_coverage=1
00:11:21.348  		--rc genhtml_legend=1
00:11:21.348  		--rc geninfo_all_blocks=1
00:11:21.348  		--rc geninfo_unexecuted_blocks=1
00:11:21.348  		
00:11:21.348  		'
00:11:21.348    06:21:38	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:21.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:21.348  		--rc genhtml_branch_coverage=1
00:11:21.348  		--rc genhtml_function_coverage=1
00:11:21.348  		--rc genhtml_legend=1
00:11:21.348  		--rc geninfo_all_blocks=1
00:11:21.348  		--rc geninfo_unexecuted_blocks=1
00:11:21.348  		
00:11:21.348  		'
00:11:21.348   06:21:38	-- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:21.348     06:21:38	-- nvmf/common.sh@7 -- # uname -s
00:11:21.348    06:21:38	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:21.348    06:21:38	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:21.348    06:21:38	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:21.348    06:21:38	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:21.348    06:21:38	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:21.348    06:21:38	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:21.348    06:21:38	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:21.348    06:21:38	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:21.348    06:21:38	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:21.348     06:21:38	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:21.348    06:21:38	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:11:21.348    06:21:38	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:11:21.348    06:21:38	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:21.348    06:21:38	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:21.348    06:21:38	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:21.348    06:21:38	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:21.348     06:21:38	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:21.348     06:21:38	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:21.348     06:21:38	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:21.348      06:21:38	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:21.348      06:21:38	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:21.348      06:21:38	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:21.348      06:21:38	-- paths/export.sh@5 -- # export PATH
00:11:21.348      06:21:38	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:21.348    06:21:38	-- nvmf/common.sh@46 -- # : 0
00:11:21.348    06:21:38	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:11:21.348    06:21:38	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:11:21.348    06:21:38	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:11:21.348    06:21:38	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:21.348    06:21:38	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:21.348    06:21:38	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:11:21.348    06:21:38	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:11:21.348    06:21:38	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:11:21.348   06:21:38	-- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
00:11:21.348   06:21:38	-- target/multitarget.sh@15 -- # nvmftestinit
00:11:21.348   06:21:38	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:11:21.349   06:21:38	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:21.349   06:21:38	-- nvmf/common.sh@436 -- # prepare_net_devs
00:11:21.349   06:21:38	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:11:21.349   06:21:38	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:11:21.349   06:21:38	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:21.349   06:21:38	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:21.349    06:21:38	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:21.349   06:21:38	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:11:21.349   06:21:38	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:11:21.349   06:21:38	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:11:21.349   06:21:38	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:11:21.349   06:21:38	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:11:21.349   06:21:38	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:11:21.349   06:21:38	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:21.349   06:21:38	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:21.349   06:21:38	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:11:21.349   06:21:38	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:11:21.349   06:21:38	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:21.349   06:21:38	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:21.349   06:21:38	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:21.349   06:21:38	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:21.349   06:21:38	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:21.349   06:21:38	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:21.349   06:21:38	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:21.349   06:21:38	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:21.349   06:21:38	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:11:21.349   06:21:38	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:11:21.349  Cannot find device "nvmf_tgt_br"
00:11:21.349   06:21:38	-- nvmf/common.sh@154 -- # true
00:11:21.349   06:21:38	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:11:21.349  Cannot find device "nvmf_tgt_br2"
00:11:21.349   06:21:38	-- nvmf/common.sh@155 -- # true
00:11:21.349   06:21:38	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:11:21.349   06:21:38	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:11:21.349  Cannot find device "nvmf_tgt_br"
00:11:21.349   06:21:38	-- nvmf/common.sh@157 -- # true
00:11:21.349   06:21:38	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:11:21.349  Cannot find device "nvmf_tgt_br2"
00:11:21.349   06:21:38	-- nvmf/common.sh@158 -- # true
00:11:21.349   06:21:38	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:11:21.349   06:21:38	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:11:21.608   06:21:38	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:21.608  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:21.608   06:21:38	-- nvmf/common.sh@161 -- # true
00:11:21.608   06:21:38	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:21.608  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:21.608   06:21:38	-- nvmf/common.sh@162 -- # true
00:11:21.608   06:21:38	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:11:21.608   06:21:38	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:21.608   06:21:38	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:21.608   06:21:38	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:21.608   06:21:38	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:21.608   06:21:38	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:21.608   06:21:38	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:21.608   06:21:38	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:11:21.608   06:21:38	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:11:21.608   06:21:38	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:11:21.608   06:21:38	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:11:21.608   06:21:38	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:11:21.608   06:21:38	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:11:21.608   06:21:38	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:21.608   06:21:38	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:21.608   06:21:38	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:21.608   06:21:38	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:11:21.608   06:21:38	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:11:21.608   06:21:38	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:11:21.608   06:21:38	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:21.608   06:21:38	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:21.608   06:21:38	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:21.608   06:21:38	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:21.608   06:21:38	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:11:21.608  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:21.608  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms
00:11:21.608  
00:11:21.608  --- 10.0.0.2 ping statistics ---
00:11:21.608  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.608  rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:11:21.608   06:21:38	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:11:21.608  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:21.608  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms
00:11:21.608  
00:11:21.608  --- 10.0.0.3 ping statistics ---
00:11:21.608  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.608  rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:11:21.608   06:21:38	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:21.608  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:21.608  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
00:11:21.608  
00:11:21.608  --- 10.0.0.1 ping statistics ---
00:11:21.608  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.608  rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
00:11:21.608   06:21:38	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:21.608   06:21:38	-- nvmf/common.sh@421 -- # return 0
00:11:21.608   06:21:38	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:11:21.608   06:21:38	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:21.608   06:21:38	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:11:21.608   06:21:38	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:11:21.608   06:21:38	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:21.608   06:21:38	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:11:21.608   06:21:38	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:11:21.608   06:21:38	-- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:11:21.608   06:21:38	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:11:21.608   06:21:38	-- common/autotest_common.sh@722 -- # xtrace_disable
00:11:21.608   06:21:38	-- common/autotest_common.sh@10 -- # set +x
00:11:21.608   06:21:38	-- nvmf/common.sh@469 -- # nvmfpid=65711
00:11:21.608   06:21:38	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:21.608   06:21:38	-- nvmf/common.sh@470 -- # waitforlisten 65711
00:11:21.608   06:21:38	-- common/autotest_common.sh@829 -- # '[' -z 65711 ']'
00:11:21.608   06:21:38	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:21.608   06:21:38	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:21.608  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:21.608   06:21:38	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:21.608   06:21:38	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:21.608   06:21:38	-- common/autotest_common.sh@10 -- # set +x
00:11:21.867  [2024-12-16 06:21:38.588320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:21.867  [2024-12-16 06:21:38.588413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:21.867  [2024-12-16 06:21:38.721995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:21.867  [2024-12-16 06:21:38.812511] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:21.867  [2024-12-16 06:21:38.812806] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:21.867  [2024-12-16 06:21:38.812864] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:21.867  [2024-12-16 06:21:38.812987] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:21.867  [2024-12-16 06:21:38.813157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:21.867  [2024-12-16 06:21:38.813286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:21.867  [2024-12-16 06:21:38.813808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:21.867  [2024-12-16 06:21:38.813817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:22.802   06:21:39	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:22.802   06:21:39	-- common/autotest_common.sh@862 -- # return 0
00:11:22.802   06:21:39	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:11:22.802   06:21:39	-- common/autotest_common.sh@728 -- # xtrace_disable
00:11:22.802   06:21:39	-- common/autotest_common.sh@10 -- # set +x
00:11:22.802   06:21:39	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:22.802   06:21:39	-- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:11:22.802    06:21:39	-- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:11:22.802    06:21:39	-- target/multitarget.sh@21 -- # jq length
00:11:22.802   06:21:39	-- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:11:22.803   06:21:39	-- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:11:23.064  "nvmf_tgt_1"
00:11:23.064   06:21:39	-- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:11:23.064  "nvmf_tgt_2"
00:11:23.064    06:21:39	-- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:11:23.064    06:21:39	-- target/multitarget.sh@28 -- # jq length
00:11:23.322   06:21:40	-- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:11:23.322   06:21:40	-- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:11:23.322  true
00:11:23.322   06:21:40	-- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:11:23.581  true
00:11:23.581    06:21:40	-- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:11:23.581    06:21:40	-- target/multitarget.sh@35 -- # jq length
00:11:23.840   06:21:40	-- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
00:11:23.840   06:21:40	-- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:11:23.840   06:21:40	-- target/multitarget.sh@41 -- # nvmftestfini
00:11:23.840   06:21:40	-- nvmf/common.sh@476 -- # nvmfcleanup
00:11:23.840   06:21:40	-- nvmf/common.sh@116 -- # sync
00:11:23.840   06:21:40	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:11:23.840   06:21:40	-- nvmf/common.sh@119 -- # set +e
00:11:23.840   06:21:40	-- nvmf/common.sh@120 -- # for i in {1..20}
00:11:23.840   06:21:40	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:11:23.840  rmmod nvme_tcp
00:11:23.840  rmmod nvme_fabrics
00:11:23.840  rmmod nvme_keyring
00:11:23.840   06:21:40	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:11:23.840   06:21:40	-- nvmf/common.sh@123 -- # set -e
00:11:23.840   06:21:40	-- nvmf/common.sh@124 -- # return 0
00:11:23.840   06:21:40	-- nvmf/common.sh@477 -- # '[' -n 65711 ']'
00:11:23.840   06:21:40	-- nvmf/common.sh@478 -- # killprocess 65711
00:11:23.840   06:21:40	-- common/autotest_common.sh@936 -- # '[' -z 65711 ']'
00:11:23.840   06:21:40	-- common/autotest_common.sh@940 -- # kill -0 65711
00:11:23.840    06:21:40	-- common/autotest_common.sh@941 -- # uname
00:11:23.840   06:21:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:23.840    06:21:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65711
00:11:23.840  killing process with pid 65711
00:11:23.840   06:21:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:23.840   06:21:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:23.840   06:21:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 65711'
00:11:23.840   06:21:40	-- common/autotest_common.sh@955 -- # kill 65711
00:11:23.840   06:21:40	-- common/autotest_common.sh@960 -- # wait 65711
00:11:24.098   06:21:40	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:11:24.098   06:21:40	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:11:24.098   06:21:40	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:11:24.098   06:21:40	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:24.098   06:21:40	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:11:24.098   06:21:40	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:24.098   06:21:40	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:24.098    06:21:40	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:24.098   06:21:40	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:11:24.098  
00:11:24.098  real	0m2.974s
00:11:24.098  user	0m9.724s
00:11:24.098  sys	0m0.698s
00:11:24.098   06:21:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:24.098  ************************************
00:11:24.098  END TEST nvmf_multitarget
00:11:24.098  ************************************
00:11:24.098   06:21:40	-- common/autotest_common.sh@10 -- # set +x
00:11:24.098   06:21:41	-- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:11:24.098   06:21:41	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:24.098   06:21:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:24.098   06:21:41	-- common/autotest_common.sh@10 -- # set +x
00:11:24.098  ************************************
00:11:24.098  START TEST nvmf_rpc
00:11:24.098  ************************************
00:11:24.098   06:21:41	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:11:24.357  * Looking for test storage...
00:11:24.357  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:24.357    06:21:41	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:24.357     06:21:41	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:24.357     06:21:41	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:24.357    06:21:41	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:24.357    06:21:41	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:24.357    06:21:41	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:24.357    06:21:41	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:24.357    06:21:41	-- scripts/common.sh@335 -- # IFS=.-:
00:11:24.357    06:21:41	-- scripts/common.sh@335 -- # read -ra ver1
00:11:24.357    06:21:41	-- scripts/common.sh@336 -- # IFS=.-:
00:11:24.357    06:21:41	-- scripts/common.sh@336 -- # read -ra ver2
00:11:24.357    06:21:41	-- scripts/common.sh@337 -- # local 'op=<'
00:11:24.357    06:21:41	-- scripts/common.sh@339 -- # ver1_l=2
00:11:24.357    06:21:41	-- scripts/common.sh@340 -- # ver2_l=1
00:11:24.357    06:21:41	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:24.357    06:21:41	-- scripts/common.sh@343 -- # case "$op" in
00:11:24.357    06:21:41	-- scripts/common.sh@344 -- # : 1
00:11:24.357    06:21:41	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:24.357    06:21:41	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:24.357     06:21:41	-- scripts/common.sh@364 -- # decimal 1
00:11:24.357     06:21:41	-- scripts/common.sh@352 -- # local d=1
00:11:24.357     06:21:41	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:24.357     06:21:41	-- scripts/common.sh@354 -- # echo 1
00:11:24.357    06:21:41	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:24.357     06:21:41	-- scripts/common.sh@365 -- # decimal 2
00:11:24.357     06:21:41	-- scripts/common.sh@352 -- # local d=2
00:11:24.357     06:21:41	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:24.357     06:21:41	-- scripts/common.sh@354 -- # echo 2
00:11:24.357    06:21:41	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:24.357    06:21:41	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:24.357    06:21:41	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:24.357    06:21:41	-- scripts/common.sh@367 -- # return 0
00:11:24.357    06:21:41	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:24.357    06:21:41	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:24.357  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.357  		--rc genhtml_branch_coverage=1
00:11:24.357  		--rc genhtml_function_coverage=1
00:11:24.357  		--rc genhtml_legend=1
00:11:24.357  		--rc geninfo_all_blocks=1
00:11:24.357  		--rc geninfo_unexecuted_blocks=1
00:11:24.357  		
00:11:24.357  		'
00:11:24.357    06:21:41	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:24.357  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.357  		--rc genhtml_branch_coverage=1
00:11:24.357  		--rc genhtml_function_coverage=1
00:11:24.357  		--rc genhtml_legend=1
00:11:24.357  		--rc geninfo_all_blocks=1
00:11:24.357  		--rc geninfo_unexecuted_blocks=1
00:11:24.357  		
00:11:24.357  		'
00:11:24.357    06:21:41	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:24.357  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.357  		--rc genhtml_branch_coverage=1
00:11:24.357  		--rc genhtml_function_coverage=1
00:11:24.357  		--rc genhtml_legend=1
00:11:24.357  		--rc geninfo_all_blocks=1
00:11:24.357  		--rc geninfo_unexecuted_blocks=1
00:11:24.357  		
00:11:24.357  		'
00:11:24.357    06:21:41	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:24.357  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.357  		--rc genhtml_branch_coverage=1
00:11:24.357  		--rc genhtml_function_coverage=1
00:11:24.357  		--rc genhtml_legend=1
00:11:24.357  		--rc geninfo_all_blocks=1
00:11:24.357  		--rc geninfo_unexecuted_blocks=1
00:11:24.357  		
00:11:24.357  		'
00:11:24.358   06:21:41	-- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:24.358     06:21:41	-- nvmf/common.sh@7 -- # uname -s
00:11:24.358    06:21:41	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:24.358    06:21:41	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:24.358    06:21:41	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:24.358    06:21:41	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:24.358    06:21:41	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:24.358    06:21:41	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:24.358    06:21:41	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:24.358    06:21:41	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:24.358    06:21:41	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:24.358     06:21:41	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:24.358    06:21:41	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:11:24.358    06:21:41	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:11:24.358    06:21:41	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:24.358    06:21:41	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:24.358    06:21:41	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:24.358    06:21:41	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:24.358     06:21:41	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:24.358     06:21:41	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:24.358     06:21:41	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:24.358      06:21:41	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.358      06:21:41	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.358      06:21:41	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.358      06:21:41	-- paths/export.sh@5 -- # export PATH
00:11:24.358      06:21:41	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.358    06:21:41	-- nvmf/common.sh@46 -- # : 0
00:11:24.358    06:21:41	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:11:24.358    06:21:41	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:11:24.358    06:21:41	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:11:24.358    06:21:41	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:24.358    06:21:41	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:24.358    06:21:41	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:11:24.358    06:21:41	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:11:24.358    06:21:41	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:11:24.358   06:21:41	-- target/rpc.sh@11 -- # loops=5
00:11:24.358   06:21:41	-- target/rpc.sh@23 -- # nvmftestinit
00:11:24.358   06:21:41	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:11:24.358   06:21:41	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:24.358   06:21:41	-- nvmf/common.sh@436 -- # prepare_net_devs
00:11:24.358   06:21:41	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:11:24.358   06:21:41	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:11:24.358   06:21:41	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:24.358   06:21:41	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:24.358    06:21:41	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:24.358   06:21:41	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:11:24.358   06:21:41	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:11:24.358   06:21:41	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:11:24.358   06:21:41	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:11:24.358   06:21:41	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:11:24.358   06:21:41	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:11:24.358   06:21:41	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:24.358   06:21:41	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:24.358   06:21:41	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:11:24.358   06:21:41	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:11:24.358   06:21:41	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:24.358   06:21:41	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:24.358   06:21:41	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:24.358   06:21:41	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:24.358   06:21:41	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:24.358   06:21:41	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:24.358   06:21:41	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:24.358   06:21:41	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:24.358   06:21:41	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:11:24.358   06:21:41	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:11:24.358  Cannot find device "nvmf_tgt_br"
00:11:24.358   06:21:41	-- nvmf/common.sh@154 -- # true
00:11:24.358   06:21:41	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:11:24.358  Cannot find device "nvmf_tgt_br2"
00:11:24.358   06:21:41	-- nvmf/common.sh@155 -- # true
00:11:24.358   06:21:41	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:11:24.358   06:21:41	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:11:24.358  Cannot find device "nvmf_tgt_br"
00:11:24.358   06:21:41	-- nvmf/common.sh@157 -- # true
00:11:24.358   06:21:41	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:11:24.358  Cannot find device "nvmf_tgt_br2"
00:11:24.358   06:21:41	-- nvmf/common.sh@158 -- # true
00:11:24.358   06:21:41	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:11:24.358   06:21:41	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:11:24.358   06:21:41	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:24.358  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:24.358   06:21:41	-- nvmf/common.sh@161 -- # true
00:11:24.358   06:21:41	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:24.617  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:24.617   06:21:41	-- nvmf/common.sh@162 -- # true
00:11:24.617   06:21:41	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:11:24.617   06:21:41	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:24.617   06:21:41	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:24.617   06:21:41	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:24.617   06:21:41	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:24.617   06:21:41	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:24.617   06:21:41	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:24.617   06:21:41	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:11:24.617   06:21:41	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:11:24.617   06:21:41	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:11:24.617   06:21:41	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:11:24.617   06:21:41	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:11:24.617   06:21:41	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:11:24.617   06:21:41	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:24.617   06:21:41	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:24.617   06:21:41	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:24.617   06:21:41	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:11:24.617   06:21:41	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:11:24.617   06:21:41	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:11:24.617   06:21:41	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:24.617   06:21:41	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:24.617   06:21:41	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:24.617   06:21:41	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:24.617   06:21:41	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:11:24.617  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:24.617  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms
00:11:24.617  
00:11:24.617  --- 10.0.0.2 ping statistics ---
00:11:24.617  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.617  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:11:24.617   06:21:41	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:11:24.617  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:24.617  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms
00:11:24.617  
00:11:24.617  --- 10.0.0.3 ping statistics ---
00:11:24.617  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.617  rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:11:24.617   06:21:41	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:24.617  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:24.617  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:11:24.617  
00:11:24.617  --- 10.0.0.1 ping statistics ---
00:11:24.617  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.617  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:11:24.617   06:21:41	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:24.617   06:21:41	-- nvmf/common.sh@421 -- # return 0
00:11:24.617   06:21:41	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:11:24.617   06:21:41	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:24.617   06:21:41	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:11:24.617   06:21:41	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:11:24.617   06:21:41	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:24.617   06:21:41	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:11:24.617   06:21:41	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
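For reference, the test-network plumbing that nvmf/common.sh rebuilt in the lines above can be reproduced by hand; this is a condensed sketch using the same interface, namespace, and address names that appear in the log (helper line numbers in common.sh vary between SPDK revisions):

  # Rebuild the veth/namespace/bridge topology used by the NVMe/TCP tests.
  ip netns add nvmf_tgt_ns_spdk                                   # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator leg
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target leg 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # target leg 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # host/initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                  # bridge the three veth peers
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # reachability check, as above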
00:11:24.617   06:21:41	-- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:11:24.617   06:21:41	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:11:24.617   06:21:41	-- common/autotest_common.sh@722 -- # xtrace_disable
00:11:24.617   06:21:41	-- common/autotest_common.sh@10 -- # set +x
00:11:24.617   06:21:41	-- nvmf/common.sh@469 -- # nvmfpid=65951
00:11:24.617   06:21:41	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:24.617   06:21:41	-- nvmf/common.sh@470 -- # waitforlisten 65951
00:11:24.617   06:21:41	-- common/autotest_common.sh@829 -- # '[' -z 65951 ']'
00:11:24.617   06:21:41	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:24.617   06:21:41	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:24.617   06:21:41	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:24.617  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:24.617   06:21:41	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:24.617   06:21:41	-- common/autotest_common.sh@10 -- # set +x
00:11:24.876  [2024-12-16 06:21:41.618588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:24.876  [2024-12-16 06:21:41.619152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:24.876  [2024-12-16 06:21:41.757418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:24.876  [2024-12-16 06:21:41.846567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:24.876  [2024-12-16 06:21:41.846937] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:24.876  [2024-12-16 06:21:41.847050] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:24.876  [2024-12-16 06:21:41.847182] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:24.876  [2024-12-16 06:21:41.847396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:24.876  [2024-12-16 06:21:41.847533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:24.876  [2024-12-16 06:21:41.847935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:24.876  [2024-12-16 06:21:41.847946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
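The launch line above starts nvmf_tgt inside the namespace with core mask 0xF, so four reactors come up on cores 0-3 and the target creates one poll group per core; that is exactly the count the script validates shortly after startup. A quick way to confirm the relationship from a shell, assuming the standard rpc.py from the same checkout (path is an assumption, not taken from the log):

  # 0xF = 0b1111 -> 4 cores -> 4 reactors -> 4 nvmf poll groups
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_stats | jq '.poll_groups | length'   # expect 4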
00:11:25.812   06:21:42	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:25.813   06:21:42	-- common/autotest_common.sh@862 -- # return 0
00:11:25.813   06:21:42	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:11:25.813   06:21:42	-- common/autotest_common.sh@728 -- # xtrace_disable
00:11:25.813   06:21:42	-- common/autotest_common.sh@10 -- # set +x
00:11:25.813   06:21:42	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:25.813    06:21:42	-- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:11:25.813    06:21:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:25.813    06:21:42	-- common/autotest_common.sh@10 -- # set +x
00:11:25.813    06:21:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:25.813   06:21:42	-- target/rpc.sh@26 -- # stats='{
00:11:25.813  "poll_groups": [
00:11:25.813  {
00:11:25.813  "admin_qpairs": 0,
00:11:25.813  "completed_nvme_io": 0,
00:11:25.813  "current_admin_qpairs": 0,
00:11:25.813  "current_io_qpairs": 0,
00:11:25.813  "io_qpairs": 0,
00:11:25.813  "name": "nvmf_tgt_poll_group_0",
00:11:25.813  "pending_bdev_io": 0,
00:11:25.813  "transports": []
00:11:25.813  },
00:11:25.813  {
00:11:25.813  "admin_qpairs": 0,
00:11:25.813  "completed_nvme_io": 0,
00:11:25.813  "current_admin_qpairs": 0,
00:11:25.813  "current_io_qpairs": 0,
00:11:25.813  "io_qpairs": 0,
00:11:25.813  "name": "nvmf_tgt_poll_group_1",
00:11:25.813  "pending_bdev_io": 0,
00:11:25.813  "transports": []
00:11:25.813  },
00:11:25.813  {
00:11:25.813  "admin_qpairs": 0,
00:11:25.813  "completed_nvme_io": 0,
00:11:25.813  "current_admin_qpairs": 0,
00:11:25.813  "current_io_qpairs": 0,
00:11:25.813  "io_qpairs": 0,
00:11:25.813  "name": "nvmf_tgt_poll_group_2",
00:11:25.813  "pending_bdev_io": 0,
00:11:25.813  "transports": []
00:11:25.813  },
00:11:25.813  {
00:11:25.813  "admin_qpairs": 0,
00:11:25.813  "completed_nvme_io": 0,
00:11:25.813  "current_admin_qpairs": 0,
00:11:25.813  "current_io_qpairs": 0,
00:11:25.813  "io_qpairs": 0,
00:11:25.813  "name": "nvmf_tgt_poll_group_3",
00:11:25.813  "pending_bdev_io": 0,
00:11:25.813  "transports": []
00:11:25.813  }
00:11:25.813  ],
00:11:25.813  "tick_rate": 2200000000
00:11:25.813  }'
00:11:25.813    06:21:42	-- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:11:25.813    06:21:42	-- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:11:25.813    06:21:42	-- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:11:25.813    06:21:42	-- target/rpc.sh@15 -- # wc -l
00:11:25.813   06:21:42	-- target/rpc.sh@28 -- # (( 4 == 4 ))
00:11:26.072    06:21:42	-- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:11:26.072   06:21:42	-- target/rpc.sh@29 -- # [[ null == null ]]
00:11:26.072   06:21:42	-- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:26.072   06:21:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.072   06:21:42	-- common/autotest_common.sh@10 -- # set +x
00:11:26.072  [2024-12-16 06:21:42.844398] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:26.072   06:21:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.072    06:21:42	-- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:11:26.072    06:21:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.072    06:21:42	-- common/autotest_common.sh@10 -- # set +x
00:11:26.072    06:21:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.072   06:21:42	-- target/rpc.sh@33 -- # stats='{
00:11:26.072  "poll_groups": [
00:11:26.072  {
00:11:26.072  "admin_qpairs": 0,
00:11:26.072  "completed_nvme_io": 0,
00:11:26.072  "current_admin_qpairs": 0,
00:11:26.072  "current_io_qpairs": 0,
00:11:26.072  "io_qpairs": 0,
00:11:26.072  "name": "nvmf_tgt_poll_group_0",
00:11:26.072  "pending_bdev_io": 0,
00:11:26.072  "transports": [
00:11:26.072  {
00:11:26.072  "trtype": "TCP"
00:11:26.072  }
00:11:26.072  ]
00:11:26.072  },
00:11:26.072  {
00:11:26.072  "admin_qpairs": 0,
00:11:26.072  "completed_nvme_io": 0,
00:11:26.072  "current_admin_qpairs": 0,
00:11:26.072  "current_io_qpairs": 0,
00:11:26.072  "io_qpairs": 0,
00:11:26.072  "name": "nvmf_tgt_poll_group_1",
00:11:26.072  "pending_bdev_io": 0,
00:11:26.072  "transports": [
00:11:26.072  {
00:11:26.072  "trtype": "TCP"
00:11:26.072  }
00:11:26.072  ]
00:11:26.072  },
00:11:26.072  {
00:11:26.072  "admin_qpairs": 0,
00:11:26.072  "completed_nvme_io": 0,
00:11:26.072  "current_admin_qpairs": 0,
00:11:26.072  "current_io_qpairs": 0,
00:11:26.072  "io_qpairs": 0,
00:11:26.072  "name": "nvmf_tgt_poll_group_2",
00:11:26.072  "pending_bdev_io": 0,
00:11:26.072  "transports": [
00:11:26.072  {
00:11:26.072  "trtype": "TCP"
00:11:26.072  }
00:11:26.072  ]
00:11:26.072  },
00:11:26.072  {
00:11:26.072  "admin_qpairs": 0,
00:11:26.072  "completed_nvme_io": 0,
00:11:26.072  "current_admin_qpairs": 0,
00:11:26.072  "current_io_qpairs": 0,
00:11:26.072  "io_qpairs": 0,
00:11:26.072  "name": "nvmf_tgt_poll_group_3",
00:11:26.072  "pending_bdev_io": 0,
00:11:26.072  "transports": [
00:11:26.072  {
00:11:26.072  "trtype": "TCP"
00:11:26.072  }
00:11:26.072  ]
00:11:26.072  }
00:11:26.072  ],
00:11:26.072  "tick_rate": 2200000000
00:11:26.072  }'
00:11:26.072    06:21:42	-- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:11:26.072    06:21:42	-- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:11:26.072    06:21:42	-- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:26.072    06:21:42	-- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:11:26.072   06:21:42	-- target/rpc.sh@35 -- # (( 0 == 0 ))
00:11:26.072    06:21:42	-- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:11:26.072    06:21:42	-- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:11:26.072    06:21:42	-- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:11:26.072    06:21:42	-- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:26.072   06:21:42	-- target/rpc.sh@36 -- # (( 0 == 0 ))
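The jcount and jsum checks above are thin jq wrappers over the captured $stats JSON; the bodies below are reconstructed from the filters visible in the trace, so treat them as an approximation of target/rpc.sh rather than the exact helpers:

  jcount() {  # count how many values a jq filter yields
    jq "$1" <<< "$stats" | wc -l
  }
  jsum() {    # sum the numeric values a jq filter yields
    jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  (( $(jcount '.poll_groups[].name') == 4 ))          # one poll group per core with -m 0xF
  (( $(jsum  '.poll_groups[].admin_qpairs') == 0 ))   # nothing connected yet
  (( $(jsum  '.poll_groups[].io_qpairs') == 0 ))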
00:11:26.072   06:21:42	-- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:11:26.072   06:21:42	-- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:11:26.072   06:21:42	-- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:11:26.072   06:21:42	-- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:11:26.072   06:21:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.072   06:21:42	-- common/autotest_common.sh@10 -- # set +x
00:11:26.072  Malloc1
00:11:26.072   06:21:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.072   06:21:43	-- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:26.072   06:21:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.072   06:21:43	-- common/autotest_common.sh@10 -- # set +x
00:11:26.072   06:21:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.072   06:21:43	-- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:26.072   06:21:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.072   06:21:43	-- common/autotest_common.sh@10 -- # set +x
00:11:26.072   06:21:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.073   06:21:43	-- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:11:26.073   06:21:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.073   06:21:43	-- common/autotest_common.sh@10 -- # set +x
00:11:26.332   06:21:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.332   06:21:43	-- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:26.332   06:21:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.332   06:21:43	-- common/autotest_common.sh@10 -- # set +x
00:11:26.332  [2024-12-16 06:21:43.056965] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:26.332   06:21:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.332   06:21:43	-- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e -a 10.0.0.2 -s 4420
00:11:26.332   06:21:43	-- common/autotest_common.sh@650 -- # local es=0
00:11:26.332   06:21:43	-- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e -a 10.0.0.2 -s 4420
00:11:26.332   06:21:43	-- common/autotest_common.sh@638 -- # local arg=nvme
00:11:26.332   06:21:43	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:26.332    06:21:43	-- common/autotest_common.sh@642 -- # type -t nvme
00:11:26.332   06:21:43	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:26.332    06:21:43	-- common/autotest_common.sh@644 -- # type -P nvme
00:11:26.332   06:21:43	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:26.332   06:21:43	-- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:11:26.332   06:21:43	-- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:11:26.332   06:21:43	-- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e -a 10.0.0.2 -s 4420
00:11:26.332  [2024-12-16 06:21:43.085315] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e'
00:11:26.332  Failed to write to /dev/nvme-fabrics: Input/output error
00:11:26.332  could not add new controller: failed to write to nvme-fabrics device
00:11:26.332   06:21:43	-- common/autotest_common.sh@653 -- # es=1
00:11:26.332   06:21:43	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:26.332   06:21:43	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:26.332   06:21:43	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:26.332   06:21:43	-- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:11:26.332   06:21:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.332   06:21:43	-- common/autotest_common.sh@10 -- # set +x
00:11:26.332   06:21:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.332   06:21:43	-- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:26.332   06:21:43	-- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:11:26.332   06:21:43	-- common/autotest_common.sh@1187 -- # local i=0
00:11:26.332   06:21:43	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:11:26.332   06:21:43	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:11:26.332   06:21:43	-- common/autotest_common.sh@1194 -- # sleep 2
00:11:28.867   06:21:45	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:11:28.867    06:21:45	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:11:28.867    06:21:45	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:11:28.867   06:21:45	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:11:28.867   06:21:45	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:11:28.867   06:21:45	-- common/autotest_common.sh@1197 -- # return 0
00:11:28.867   06:21:45	-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:28.867  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:28.867   06:21:45	-- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:28.867   06:21:45	-- common/autotest_common.sh@1208 -- # local i=0
00:11:28.867   06:21:45	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:28.867   06:21:45	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:11:28.867   06:21:45	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:11:28.867   06:21:45	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:28.867   06:21:45	-- common/autotest_common.sh@1220 -- # return 0
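The cycle above is the subsystem access-control check: with allow_any_host disabled, the connect is expected to fail with the "does not allow host" error, and only after nvmf_subsystem_add_host registers the host NQN does the same connect succeed. A condensed sketch of that flow (the rpc.py path is assumed from the checkout shown in the log):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e

  $RPC nvmf_subsystem_allow_any_host -d $NQN            # restrict to the allowed-hosts list
  nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN \
    || echo "rejected as expected: host not on the allowed list"

  $RPC nvmf_subsystem_add_host $NQN $HOSTNQN            # now allow this host NQN
  nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN
  nvme disconnect -n $NQN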
00:11:28.867   06:21:45	-- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:11:28.867   06:21:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.867   06:21:45	-- common/autotest_common.sh@10 -- # set +x
00:11:28.867   06:21:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.867   06:21:45	-- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:28.867   06:21:45	-- common/autotest_common.sh@650 -- # local es=0
00:11:28.867   06:21:45	-- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:28.867   06:21:45	-- common/autotest_common.sh@638 -- # local arg=nvme
00:11:28.867   06:21:45	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:28.867    06:21:45	-- common/autotest_common.sh@642 -- # type -t nvme
00:11:28.867   06:21:45	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:28.867    06:21:45	-- common/autotest_common.sh@644 -- # type -P nvme
00:11:28.867   06:21:45	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:28.867   06:21:45	-- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:11:28.867   06:21:45	-- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:11:28.867   06:21:45	-- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:28.867  [2024-12-16 06:21:45.386438] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e'
00:11:28.867  Failed to write to /dev/nvme-fabrics: Input/output error
00:11:28.867  could not add new controller: failed to write to nvme-fabrics device
00:11:28.867   06:21:45	-- common/autotest_common.sh@653 -- # es=1
00:11:28.867   06:21:45	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:28.867   06:21:45	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:28.867   06:21:45	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:28.867   06:21:45	-- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:11:28.867   06:21:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.867   06:21:45	-- common/autotest_common.sh@10 -- # set +x
00:11:28.867   06:21:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.867   06:21:45	-- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:28.867   06:21:45	-- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:11:28.867   06:21:45	-- common/autotest_common.sh@1187 -- # local i=0
00:11:28.867   06:21:45	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:11:28.867   06:21:45	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:11:28.867   06:21:45	-- common/autotest_common.sh@1194 -- # sleep 2
00:11:30.769   06:21:47	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:11:30.769    06:21:47	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:11:30.769    06:21:47	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:11:30.769   06:21:47	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:11:30.769   06:21:47	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:11:30.769   06:21:47	-- common/autotest_common.sh@1197 -- # return 0
00:11:30.769   06:21:47	-- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:30.769  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:30.769   06:21:47	-- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:30.769   06:21:47	-- common/autotest_common.sh@1208 -- # local i=0
00:11:30.769   06:21:47	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:11:30.769   06:21:47	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:30.769   06:21:47	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:11:30.769   06:21:47	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:30.769   06:21:47	-- common/autotest_common.sh@1220 -- # return 0
00:11:30.769   06:21:47	-- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:30.769   06:21:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.769   06:21:47	-- common/autotest_common.sh@10 -- # set +x
00:11:30.769   06:21:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.769    06:21:47	-- target/rpc.sh@81 -- # seq 1 5
00:11:30.769   06:21:47	-- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:30.769   06:21:47	-- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:30.769   06:21:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.769   06:21:47	-- common/autotest_common.sh@10 -- # set +x
00:11:30.769   06:21:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.769   06:21:47	-- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:30.769   06:21:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.769   06:21:47	-- common/autotest_common.sh@10 -- # set +x
00:11:30.769  [2024-12-16 06:21:47.683057] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:30.769   06:21:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.769   06:21:47	-- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:30.769   06:21:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.769   06:21:47	-- common/autotest_common.sh@10 -- # set +x
00:11:30.769   06:21:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.769   06:21:47	-- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:30.769   06:21:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.769   06:21:47	-- common/autotest_common.sh@10 -- # set +x
00:11:30.769   06:21:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.769   06:21:47	-- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:31.028   06:21:47	-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:31.028   06:21:47	-- common/autotest_common.sh@1187 -- # local i=0
00:11:31.028   06:21:47	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:11:31.028   06:21:47	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:11:31.028   06:21:47	-- common/autotest_common.sh@1194 -- # sleep 2
00:11:32.930   06:21:49	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:11:32.930    06:21:49	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:11:32.930    06:21:49	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:11:32.930   06:21:49	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:11:32.930   06:21:49	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:11:32.930   06:21:49	-- common/autotest_common.sh@1197 -- # return 0
00:11:32.930   06:21:49	-- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:33.189  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:33.189   06:21:49	-- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:33.189   06:21:49	-- common/autotest_common.sh@1208 -- # local i=0
00:11:33.189   06:21:49	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:11:33.189   06:21:49	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:33.189   06:21:49	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:33.189   06:21:49	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:11:33.189   06:21:49	-- common/autotest_common.sh@1220 -- # return 0
00:11:33.189   06:21:49	-- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:33.189   06:21:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.189   06:21:49	-- common/autotest_common.sh@10 -- # set +x
00:11:33.189   06:21:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.189   06:21:49	-- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:33.189   06:21:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.189   06:21:49	-- common/autotest_common.sh@10 -- # set +x
00:11:33.189   06:21:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
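Each of the five loop iterations above repeats the same create/attach/connect/tear-down pattern; one iteration boils down to roughly the following (NQN, serial, and namespace ID copied from the log, rpc.py path assumed):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_subsystem $NQN -s SPDKISFASTANDAWESOME        # subsystem with known serial
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 5                    # expose Malloc1 as namespace 5
  $RPC nvmf_subsystem_allow_any_host $NQN                         # open it up for the initiator
  nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420                 # waitforserial greps lsblk for the serial
  nvme disconnect -n $NQN
  $RPC nvmf_subsystem_remove_ns $NQN 5
  $RPC nvmf_delete_subsystem $NQN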
00:11:33.189   06:21:49	-- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:33.189   06:21:49	-- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:33.189   06:21:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.189   06:21:49	-- common/autotest_common.sh@10 -- # set +x
00:11:33.189   06:21:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.189   06:21:49	-- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:33.189   06:21:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.189   06:21:49	-- common/autotest_common.sh@10 -- # set +x
00:11:33.189  [2024-12-16 06:21:49.990073] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:33.189   06:21:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.189   06:21:49	-- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:33.189   06:21:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.189   06:21:49	-- common/autotest_common.sh@10 -- # set +x
00:11:33.189   06:21:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.189   06:21:50	-- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:33.189   06:21:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.189   06:21:50	-- common/autotest_common.sh@10 -- # set +x
00:11:33.189   06:21:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.189   06:21:50	-- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:33.448   06:21:50	-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:33.448   06:21:50	-- common/autotest_common.sh@1187 -- # local i=0
00:11:33.448   06:21:50	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:11:33.448   06:21:50	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:11:33.448   06:21:50	-- common/autotest_common.sh@1194 -- # sleep 2
00:11:35.348   06:21:52	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:11:35.348    06:21:52	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:11:35.348    06:21:52	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:11:35.348   06:21:52	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:11:35.348   06:21:52	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:11:35.348   06:21:52	-- common/autotest_common.sh@1197 -- # return 0
00:11:35.348   06:21:52	-- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:35.348  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:35.348   06:21:52	-- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:35.348   06:21:52	-- common/autotest_common.sh@1208 -- # local i=0
00:11:35.348   06:21:52	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:11:35.348   06:21:52	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:35.348   06:21:52	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:11:35.348   06:21:52	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:35.348   06:21:52	-- common/autotest_common.sh@1220 -- # return 0
00:11:35.348   06:21:52	-- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:35.348   06:21:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.348   06:21:52	-- common/autotest_common.sh@10 -- # set +x
00:11:35.348   06:21:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.348   06:21:52	-- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:35.348   06:21:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.348   06:21:52	-- common/autotest_common.sh@10 -- # set +x
00:11:35.348   06:21:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.348   06:21:52	-- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:35.348   06:21:52	-- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:35.348   06:21:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.348   06:21:52	-- common/autotest_common.sh@10 -- # set +x
00:11:35.348   06:21:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.348   06:21:52	-- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:35.348   06:21:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.348   06:21:52	-- common/autotest_common.sh@10 -- # set +x
00:11:35.348  [2024-12-16 06:21:52.289492] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:35.348   06:21:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.348   06:21:52	-- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:35.348   06:21:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.348   06:21:52	-- common/autotest_common.sh@10 -- # set +x
00:11:35.348   06:21:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.348   06:21:52	-- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:35.348   06:21:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.348   06:21:52	-- common/autotest_common.sh@10 -- # set +x
00:11:35.348   06:21:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.348   06:21:52	-- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:35.607   06:21:52	-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:35.607   06:21:52	-- common/autotest_common.sh@1187 -- # local i=0
00:11:35.607   06:21:52	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:11:35.607   06:21:52	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:11:35.607   06:21:52	-- common/autotest_common.sh@1194 -- # sleep 2
00:11:38.139   06:21:54	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:11:38.139    06:21:54	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:11:38.139    06:21:54	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:11:38.139   06:21:54	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:11:38.139   06:21:54	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:11:38.139   06:21:54	-- common/autotest_common.sh@1197 -- # return 0
00:11:38.139   06:21:54	-- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:38.139  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:38.139   06:21:54	-- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:38.139   06:21:54	-- common/autotest_common.sh@1208 -- # local i=0
00:11:38.139   06:21:54	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:11:38.139   06:21:54	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:38.139   06:21:54	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:38.139   06:21:54	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:11:38.139   06:21:54	-- common/autotest_common.sh@1220 -- # return 0
00:11:38.139   06:21:54	-- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:38.139   06:21:54	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.139   06:21:54	-- common/autotest_common.sh@10 -- # set +x
00:11:38.139   06:21:54	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.139   06:21:54	-- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:38.139   06:21:54	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.139   06:21:54	-- common/autotest_common.sh@10 -- # set +x
00:11:38.139   06:21:54	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.139   06:21:54	-- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:38.139   06:21:54	-- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:38.139   06:21:54	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.139   06:21:54	-- common/autotest_common.sh@10 -- # set +x
00:11:38.139   06:21:54	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.139   06:21:54	-- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:38.139   06:21:54	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.139   06:21:54	-- common/autotest_common.sh@10 -- # set +x
00:11:38.139  [2024-12-16 06:21:54.593042] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:38.139   06:21:54	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.139   06:21:54	-- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:38.139   06:21:54	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.139   06:21:54	-- common/autotest_common.sh@10 -- # set +x
00:11:38.139   06:21:54	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.139   06:21:54	-- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:38.139   06:21:54	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.139   06:21:54	-- common/autotest_common.sh@10 -- # set +x
00:11:38.139   06:21:54	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.139   06:21:54	-- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:38.139   06:21:54	-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:38.139   06:21:54	-- common/autotest_common.sh@1187 -- # local i=0
00:11:38.139   06:21:54	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:11:38.139   06:21:54	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:11:38.139   06:21:54	-- common/autotest_common.sh@1194 -- # sleep 2
00:11:40.040   06:21:56	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:11:40.040    06:21:56	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:11:40.040    06:21:56	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:11:40.040   06:21:56	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:11:40.040   06:21:56	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:11:40.040   06:21:56	-- common/autotest_common.sh@1197 -- # return 0
00:11:40.040   06:21:56	-- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:40.040  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:40.040   06:21:56	-- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:40.040   06:21:56	-- common/autotest_common.sh@1208 -- # local i=0
00:11:40.040   06:21:56	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:11:40.040   06:21:56	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:40.040   06:21:56	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:11:40.040   06:21:56	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:40.040   06:21:56	-- common/autotest_common.sh@1220 -- # return 0
00:11:40.040   06:21:56	-- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:40.040   06:21:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.040   06:21:56	-- common/autotest_common.sh@10 -- # set +x
00:11:40.040   06:21:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.040   06:21:56	-- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:40.040   06:21:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.040   06:21:56	-- common/autotest_common.sh@10 -- # set +x
00:11:40.040   06:21:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.040   06:21:56	-- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:40.040   06:21:56	-- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:40.040   06:21:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.040   06:21:56	-- common/autotest_common.sh@10 -- # set +x
00:11:40.040   06:21:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.040   06:21:56	-- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:40.040   06:21:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.040   06:21:56	-- common/autotest_common.sh@10 -- # set +x
00:11:40.040  [2024-12-16 06:21:56.892069] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:40.040   06:21:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.040   06:21:56	-- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:40.040   06:21:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.040   06:21:56	-- common/autotest_common.sh@10 -- # set +x
00:11:40.040   06:21:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.040   06:21:56	-- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:40.040   06:21:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.040   06:21:56	-- common/autotest_common.sh@10 -- # set +x
00:11:40.040   06:21:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.040   06:21:56	-- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:40.298   06:21:57	-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:40.298   06:21:57	-- common/autotest_common.sh@1187 -- # local i=0
00:11:40.298   06:21:57	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:11:40.298   06:21:57	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:11:40.298   06:21:57	-- common/autotest_common.sh@1194 -- # sleep 2
00:11:42.202   06:21:59	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:11:42.202    06:21:59	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:11:42.202    06:21:59	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:11:42.202   06:21:59	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:11:42.202   06:21:59	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:11:42.202   06:21:59	-- common/autotest_common.sh@1197 -- # return 0
00:11:42.202   06:21:59	-- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:42.202  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:42.202   06:21:59	-- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:42.202   06:21:59	-- common/autotest_common.sh@1208 -- # local i=0
00:11:42.202   06:21:59	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:11:42.202   06:21:59	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:42.202   06:21:59	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:11:42.202   06:21:59	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:42.202   06:21:59	-- common/autotest_common.sh@1220 -- # return 0
00:11:42.202   06:21:59	-- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:42.202   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.202   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.461   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.461   06:21:59	-- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:42.461   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.461   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.461   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.461    06:21:59	-- target/rpc.sh@99 -- # seq 1 5
00:11:42.461   06:21:59	-- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:42.461   06:21:59	-- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:42.461   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.461   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.461   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.461   06:21:59	-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:42.461   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.461   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.461  [2024-12-16 06:21:59.211142] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:42.461   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.461   06:21:59	-- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:42.461   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.461   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.461   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.461   06:21:59	-- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:42.462   06:21:59	-- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462  [2024-12-16 06:21:59.259153] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:42.462   06:21:59	-- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462  [2024-12-16 06:21:59.311212] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:42.462   06:21:59	-- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462  [2024-12-16 06:21:59.359253] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:42.462   06:21:59	-- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462  [2024-12-16 06:21:59.407321] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.462   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.462   06:21:59	-- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:42.462   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.462   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.721   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.721   06:21:59	-- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:42.721   06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.721   06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.721   06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.721    06:21:59	-- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:11:42.721    06:21:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.721    06:21:59	-- common/autotest_common.sh@10 -- # set +x
00:11:42.721    06:21:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.721   06:21:59	-- target/rpc.sh@110 -- # stats='{
00:11:42.721  "poll_groups": [
00:11:42.721  {
00:11:42.721  "admin_qpairs": 2,
00:11:42.721  "completed_nvme_io": 66,
00:11:42.721  "current_admin_qpairs": 0,
00:11:42.721  "current_io_qpairs": 0,
00:11:42.721  "io_qpairs": 16,
00:11:42.721  "name": "nvmf_tgt_poll_group_0",
00:11:42.721  "pending_bdev_io": 0,
00:11:42.721  "transports": [
00:11:42.721  {
00:11:42.721  "trtype": "TCP"
00:11:42.721  }
00:11:42.721  ]
00:11:42.721  },
00:11:42.721  {
00:11:42.721  "admin_qpairs": 3,
00:11:42.721  "completed_nvme_io": 116,
00:11:42.721  "current_admin_qpairs": 0,
00:11:42.721  "current_io_qpairs": 0,
00:11:42.721  "io_qpairs": 17,
00:11:42.721  "name": "nvmf_tgt_poll_group_1",
00:11:42.721  "pending_bdev_io": 0,
00:11:42.721  "transports": [
00:11:42.721  {
00:11:42.721  "trtype": "TCP"
00:11:42.721  }
00:11:42.721  ]
00:11:42.721  },
00:11:42.721  {
00:11:42.721  "admin_qpairs": 1,
00:11:42.721  "completed_nvme_io": 169,
00:11:42.721  "current_admin_qpairs": 0,
00:11:42.721  "current_io_qpairs": 0,
00:11:42.721  "io_qpairs": 19,
00:11:42.721  "name": "nvmf_tgt_poll_group_2",
00:11:42.721  "pending_bdev_io": 0,
00:11:42.721  "transports": [
00:11:42.721  {
00:11:42.721  "trtype": "TCP"
00:11:42.721  }
00:11:42.721  ]
00:11:42.721  },
00:11:42.721  {
00:11:42.721  "admin_qpairs": 1,
00:11:42.721  "completed_nvme_io": 69,
00:11:42.721  "current_admin_qpairs": 0,
00:11:42.721  "current_io_qpairs": 0,
00:11:42.721  "io_qpairs": 18,
00:11:42.721  "name": "nvmf_tgt_poll_group_3",
00:11:42.722  "pending_bdev_io": 0,
00:11:42.722  "transports": [
00:11:42.722  {
00:11:42.722  "trtype": "TCP"
00:11:42.722  }
00:11:42.722  ]
00:11:42.722  }
00:11:42.722  ],
00:11:42.722  "tick_rate": 2200000000
00:11:42.722  }'
00:11:42.722    06:21:59	-- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:11:42.722    06:21:59	-- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:11:42.722    06:21:59	-- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:11:42.722    06:21:59	-- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:42.722   06:21:59	-- target/rpc.sh@112 -- # (( 7 > 0 ))
00:11:42.722    06:21:59	-- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:11:42.722    06:21:59	-- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:11:42.722    06:21:59	-- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:11:42.722    06:21:59	-- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:42.722   06:21:59	-- target/rpc.sh@113 -- # (( 70 > 0 ))
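[annotation] The two checks above sum a per-poll-group field from the nvmf_get_stats JSON captured into $stats; a minimal sketch of the jsum helper, assuming it simply pipes $stats through the jq filter and awk sum shown in the trace (not the verbatim target/rpc.sh source):
    jsum() {
        local filter=$1
        # print the requested field for every poll group, then sum the lines
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    # e.g. jsum '.poll_groups[].admin_qpairs'  -> 2+3+1+1 = 7 for the stats above
    #      jsum '.poll_groups[].io_qpairs'     -> 16+17+19+18 = 70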
00:11:42.722   06:21:59	-- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:11:42.722   06:21:59	-- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:11:42.722   06:21:59	-- target/rpc.sh@123 -- # nvmftestfini
00:11:42.722   06:21:59	-- nvmf/common.sh@476 -- # nvmfcleanup
00:11:42.722   06:21:59	-- nvmf/common.sh@116 -- # sync
00:11:42.722   06:21:59	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:11:42.722   06:21:59	-- nvmf/common.sh@119 -- # set +e
00:11:42.722   06:21:59	-- nvmf/common.sh@120 -- # for i in {1..20}
00:11:42.722   06:21:59	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:11:42.722  rmmod nvme_tcp
00:11:42.722  rmmod nvme_fabrics
00:11:42.722  rmmod nvme_keyring
00:11:42.722   06:21:59	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:11:42.722   06:21:59	-- nvmf/common.sh@123 -- # set -e
00:11:42.722   06:21:59	-- nvmf/common.sh@124 -- # return 0
00:11:42.722   06:21:59	-- nvmf/common.sh@477 -- # '[' -n 65951 ']'
00:11:42.722   06:21:59	-- nvmf/common.sh@478 -- # killprocess 65951
00:11:42.722   06:21:59	-- common/autotest_common.sh@936 -- # '[' -z 65951 ']'
00:11:42.722   06:21:59	-- common/autotest_common.sh@940 -- # kill -0 65951
00:11:42.722    06:21:59	-- common/autotest_common.sh@941 -- # uname
00:11:42.722   06:21:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:42.722    06:21:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65951
00:11:42.722  killing process with pid 65951
00:11:42.722   06:21:59	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:42.722   06:21:59	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:42.722   06:21:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 65951'
00:11:42.722   06:21:59	-- common/autotest_common.sh@955 -- # kill 65951
00:11:42.722   06:21:59	-- common/autotest_common.sh@960 -- # wait 65951
00:11:43.290   06:21:59	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:11:43.290   06:21:59	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:11:43.290   06:21:59	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:11:43.290   06:21:59	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:43.290   06:21:59	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:11:43.290   06:21:59	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:43.290   06:21:59	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:43.290    06:21:59	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:43.290   06:21:59	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:11:43.290  ************************************
00:11:43.290  END TEST nvmf_rpc
00:11:43.290  ************************************
00:11:43.290  
00:11:43.290  real	0m18.965s
00:11:43.290  user	1m11.329s
00:11:43.290  sys	0m2.511s
00:11:43.290   06:21:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:43.290   06:21:59	-- common/autotest_common.sh@10 -- # set +x
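[annotation] The nvmftestfini teardown traced above boils down to roughly the following; this is a hedged recap of the commands visible in the log, and the netns delete is an assumption since _remove_spdk_ns itself is not echoed:
    modprobe -v -r nvme-tcp                       # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 65951                                    # nvmfpid for this run (reactor_0)
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null  # assumption: what _remove_spdk_ns amounts to here
    ip -4 addr flush nvmf_init_if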
00:11:43.290   06:22:00	-- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:11:43.290   06:22:00	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:43.290   06:22:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:43.290   06:22:00	-- common/autotest_common.sh@10 -- # set +x
00:11:43.290  ************************************
00:11:43.290  START TEST nvmf_invalid
00:11:43.290  ************************************
00:11:43.290   06:22:00	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:11:43.290  * Looking for test storage...
00:11:43.290  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:43.290    06:22:00	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:43.290     06:22:00	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:43.290     06:22:00	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:43.290    06:22:00	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:43.290    06:22:00	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:43.290    06:22:00	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:43.290    06:22:00	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:43.290    06:22:00	-- scripts/common.sh@335 -- # IFS=.-:
00:11:43.290    06:22:00	-- scripts/common.sh@335 -- # read -ra ver1
00:11:43.290    06:22:00	-- scripts/common.sh@336 -- # IFS=.-:
00:11:43.290    06:22:00	-- scripts/common.sh@336 -- # read -ra ver2
00:11:43.290    06:22:00	-- scripts/common.sh@337 -- # local 'op=<'
00:11:43.290    06:22:00	-- scripts/common.sh@339 -- # ver1_l=2
00:11:43.290    06:22:00	-- scripts/common.sh@340 -- # ver2_l=1
00:11:43.290    06:22:00	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:43.290    06:22:00	-- scripts/common.sh@343 -- # case "$op" in
00:11:43.290    06:22:00	-- scripts/common.sh@344 -- # : 1
00:11:43.290    06:22:00	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:43.290    06:22:00	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:43.290     06:22:00	-- scripts/common.sh@364 -- # decimal 1
00:11:43.290     06:22:00	-- scripts/common.sh@352 -- # local d=1
00:11:43.290     06:22:00	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:43.290     06:22:00	-- scripts/common.sh@354 -- # echo 1
00:11:43.290    06:22:00	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:43.290     06:22:00	-- scripts/common.sh@365 -- # decimal 2
00:11:43.290     06:22:00	-- scripts/common.sh@352 -- # local d=2
00:11:43.290     06:22:00	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:43.290     06:22:00	-- scripts/common.sh@354 -- # echo 2
00:11:43.290    06:22:00	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:43.290    06:22:00	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:43.290    06:22:00	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:43.290    06:22:00	-- scripts/common.sh@367 -- # return 0
00:11:43.290    06:22:00	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:43.290    06:22:00	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:43.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:43.290  		--rc genhtml_branch_coverage=1
00:11:43.290  		--rc genhtml_function_coverage=1
00:11:43.290  		--rc genhtml_legend=1
00:11:43.290  		--rc geninfo_all_blocks=1
00:11:43.290  		--rc geninfo_unexecuted_blocks=1
00:11:43.290  		
00:11:43.290  		'
00:11:43.290    06:22:00	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:43.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:43.290  		--rc genhtml_branch_coverage=1
00:11:43.290  		--rc genhtml_function_coverage=1
00:11:43.290  		--rc genhtml_legend=1
00:11:43.290  		--rc geninfo_all_blocks=1
00:11:43.290  		--rc geninfo_unexecuted_blocks=1
00:11:43.290  		
00:11:43.290  		'
00:11:43.290    06:22:00	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:43.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:43.290  		--rc genhtml_branch_coverage=1
00:11:43.290  		--rc genhtml_function_coverage=1
00:11:43.290  		--rc genhtml_legend=1
00:11:43.290  		--rc geninfo_all_blocks=1
00:11:43.290  		--rc geninfo_unexecuted_blocks=1
00:11:43.290  		
00:11:43.290  		'
00:11:43.290    06:22:00	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:43.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:43.290  		--rc genhtml_branch_coverage=1
00:11:43.290  		--rc genhtml_function_coverage=1
00:11:43.290  		--rc genhtml_legend=1
00:11:43.290  		--rc geninfo_all_blocks=1
00:11:43.290  		--rc geninfo_unexecuted_blocks=1
00:11:43.290  		
00:11:43.290  		'
00:11:43.290   06:22:00	-- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:43.290     06:22:00	-- nvmf/common.sh@7 -- # uname -s
00:11:43.290    06:22:00	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:43.290    06:22:00	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:43.290    06:22:00	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:43.290    06:22:00	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:43.290    06:22:00	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:43.290    06:22:00	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:43.290    06:22:00	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:43.290    06:22:00	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:43.290    06:22:00	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:43.290     06:22:00	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:43.290    06:22:00	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:11:43.290    06:22:00	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:11:43.290    06:22:00	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:43.290    06:22:00	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:43.290    06:22:00	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:43.290    06:22:00	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:43.290     06:22:00	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:43.290     06:22:00	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:43.290     06:22:00	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:43.290      06:22:00	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:43.291      06:22:00	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:43.291      06:22:00	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:43.291      06:22:00	-- paths/export.sh@5 -- # export PATH
00:11:43.291      06:22:00	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:43.291    06:22:00	-- nvmf/common.sh@46 -- # : 0
00:11:43.291    06:22:00	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:11:43.291    06:22:00	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:11:43.291    06:22:00	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:11:43.291    06:22:00	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:43.291    06:22:00	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:43.291    06:22:00	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:11:43.291    06:22:00	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:11:43.291    06:22:00	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:11:43.291   06:22:00	-- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
00:11:43.291   06:22:00	-- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:43.291   06:22:00	-- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode
00:11:43.291   06:22:00	-- target/invalid.sh@14 -- # target=foobar
00:11:43.291   06:22:00	-- target/invalid.sh@16 -- # RANDOM=0
00:11:43.291   06:22:00	-- target/invalid.sh@34 -- # nvmftestinit
00:11:43.291   06:22:00	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:11:43.291   06:22:00	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:43.291   06:22:00	-- nvmf/common.sh@436 -- # prepare_net_devs
00:11:43.291   06:22:00	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:11:43.291   06:22:00	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:11:43.291   06:22:00	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:43.291   06:22:00	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:43.291    06:22:00	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:43.550   06:22:00	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:11:43.550   06:22:00	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:11:43.550   06:22:00	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:11:43.550   06:22:00	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:11:43.550   06:22:00	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:11:43.550   06:22:00	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:11:43.550   06:22:00	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:43.550   06:22:00	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:43.550   06:22:00	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:11:43.550   06:22:00	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:11:43.550   06:22:00	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:43.550   06:22:00	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:43.550   06:22:00	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:43.550   06:22:00	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:43.550   06:22:00	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:43.550   06:22:00	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:43.550   06:22:00	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:43.550   06:22:00	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:43.550   06:22:00	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:11:43.550   06:22:00	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:11:43.550  Cannot find device "nvmf_tgt_br"
00:11:43.550   06:22:00	-- nvmf/common.sh@154 -- # true
00:11:43.550   06:22:00	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:11:43.550  Cannot find device "nvmf_tgt_br2"
00:11:43.550   06:22:00	-- nvmf/common.sh@155 -- # true
00:11:43.550   06:22:00	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:11:43.550   06:22:00	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:11:43.550  Cannot find device "nvmf_tgt_br"
00:11:43.550   06:22:00	-- nvmf/common.sh@157 -- # true
00:11:43.550   06:22:00	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:11:43.550  Cannot find device "nvmf_tgt_br2"
00:11:43.550   06:22:00	-- nvmf/common.sh@158 -- # true
00:11:43.550   06:22:00	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:11:43.550   06:22:00	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:11:43.550   06:22:00	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:43.550  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:43.550   06:22:00	-- nvmf/common.sh@161 -- # true
00:11:43.550   06:22:00	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:43.551  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:43.551   06:22:00	-- nvmf/common.sh@162 -- # true
00:11:43.551   06:22:00	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:11:43.551   06:22:00	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:43.551   06:22:00	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:43.551   06:22:00	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:43.551   06:22:00	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:43.551   06:22:00	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:43.551   06:22:00	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:43.551   06:22:00	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:11:43.551   06:22:00	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:11:43.551   06:22:00	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:11:43.551   06:22:00	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:11:43.551   06:22:00	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:11:43.551   06:22:00	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:11:43.551   06:22:00	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:43.551   06:22:00	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:43.809   06:22:00	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:43.809   06:22:00	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:11:43.809   06:22:00	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:11:43.809   06:22:00	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:11:43.809   06:22:00	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:43.809   06:22:00	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:43.809   06:22:00	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:43.809   06:22:00	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:43.809   06:22:00	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:11:43.809  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:43.809  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms
00:11:43.809  
00:11:43.809  --- 10.0.0.2 ping statistics ---
00:11:43.809  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.809  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:11:43.809   06:22:00	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:11:43.809  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:43.809  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms
00:11:43.809  
00:11:43.809  --- 10.0.0.3 ping statistics ---
00:11:43.809  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.809  rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:11:43.809   06:22:00	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:43.809  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:43.809  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms
00:11:43.809  
00:11:43.809  --- 10.0.0.1 ping statistics ---
00:11:43.809  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.809  rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms
00:11:43.809   06:22:00	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:43.810   06:22:00	-- nvmf/common.sh@421 -- # return 0
00:11:43.810   06:22:00	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:11:43.810   06:22:00	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:43.810   06:22:00	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:11:43.810   06:22:00	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:11:43.810   06:22:00	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:43.810   06:22:00	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:11:43.810   06:22:00	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
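[annotation] Condensed recap of the nvmf_veth_init topology just built (taken from the ip/iptables commands above; the intermediate "link set ... up" steps are omitted): the initiator keeps nvmf_init_if on the host, both target veths move into the nvmf_tgt_ns_spdk namespace, and their peers are bridged so 10.0.0.1 can reach 10.0.0.2/10.0.0.3 over TCP port 4420.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT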
00:11:43.810   06:22:00	-- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:11:43.810   06:22:00	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:11:43.810   06:22:00	-- common/autotest_common.sh@722 -- # xtrace_disable
00:11:43.810   06:22:00	-- common/autotest_common.sh@10 -- # set +x
00:11:43.810   06:22:00	-- nvmf/common.sh@469 -- # nvmfpid=66474
00:11:43.810   06:22:00	-- nvmf/common.sh@470 -- # waitforlisten 66474
00:11:43.810   06:22:00	-- common/autotest_common.sh@829 -- # '[' -z 66474 ']'
00:11:43.810   06:22:00	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:43.810   06:22:00	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:43.810   06:22:00	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:43.810  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:43.810   06:22:00	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:43.810   06:22:00	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:43.810   06:22:00	-- common/autotest_common.sh@10 -- # set +x
00:11:43.810  [2024-12-16 06:22:00.692501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:43.810  [2024-12-16 06:22:00.692586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:44.069  [2024-12-16 06:22:00.831727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:44.069  [2024-12-16 06:22:00.921963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:44.069  [2024-12-16 06:22:00.922417] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:44.069  [2024-12-16 06:22:00.922705] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:44.069  [2024-12-16 06:22:00.922948] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:44.069  [2024-12-16 06:22:00.923328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:44.069  [2024-12-16 06:22:00.923474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:44.069  [2024-12-16 06:22:00.923543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:44.069  [2024-12-16 06:22:00.923543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:45.004   06:22:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:45.004   06:22:01	-- common/autotest_common.sh@862 -- # return 0
00:11:45.004   06:22:01	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:11:45.004   06:22:01	-- common/autotest_common.sh@728 -- # xtrace_disable
00:11:45.004   06:22:01	-- common/autotest_common.sh@10 -- # set +x
00:11:45.004   06:22:01	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:45.004   06:22:01	-- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:11:45.004    06:22:01	-- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17693
00:11:45.004  [2024-12-16 06:22:01.947897] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:11:45.004   06:22:01	-- target/invalid.sh@40 -- # out='2024/12/16 06:22:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17693 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar
00:11:45.004  request:
00:11:45.004  {
00:11:45.004    "method": "nvmf_create_subsystem",
00:11:45.004    "params": {
00:11:45.004      "nqn": "nqn.2016-06.io.spdk:cnode17693",
00:11:45.004      "tgt_name": "foobar"
00:11:45.004    }
00:11:45.004  }
00:11:45.004  Got JSON-RPC error response
00:11:45.004  GoRPCClient: error on JSON-RPC call'
00:11:45.004   06:22:01	-- target/invalid.sh@41 -- # [[ 2024/12/16 06:22:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17693 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar
00:11:45.004  request:
00:11:45.004  {
00:11:45.004    "method": "nvmf_create_subsystem",
00:11:45.004    "params": {
00:11:45.004      "nqn": "nqn.2016-06.io.spdk:cnode17693",
00:11:45.004      "tgt_name": "foobar"
00:11:45.004    }
00:11:45.004  }
00:11:45.004  Got JSON-RPC error response
00:11:45.004  GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:11:45.004     06:22:01	-- target/invalid.sh@45 -- # echo -e '\x1f'
00:11:45.004    06:22:01	-- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15088
00:11:45.571  [2024-12-16 06:22:02.244185] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15088: invalid serial number 'SPDKISFASTANDAWESOME'
00:11:45.571   06:22:02	-- target/invalid.sh@45 -- # out='2024/12/16 06:22:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15088 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME
00:11:45.571  request:
00:11:45.571  {
00:11:45.571    "method": "nvmf_create_subsystem",
00:11:45.571    "params": {
00:11:45.571      "nqn": "nqn.2016-06.io.spdk:cnode15088",
00:11:45.571      "serial_number": "SPDKISFASTANDAWESOME\u001f"
00:11:45.571    }
00:11:45.571  }
00:11:45.571  Got JSON-RPC error response
00:11:45.571  GoRPCClient: error on JSON-RPC call'
00:11:45.571   06:22:02	-- target/invalid.sh@46 -- # [[ 2024/12/16 06:22:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15088 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME
00:11:45.571  request:
00:11:45.571  {
00:11:45.571    "method": "nvmf_create_subsystem",
00:11:45.571    "params": {
00:11:45.571      "nqn": "nqn.2016-06.io.spdk:cnode15088",
00:11:45.571      "serial_number": "SPDKISFASTANDAWESOME\u001f"
00:11:45.571    }
00:11:45.571  }
00:11:45.571  Got JSON-RPC error response
00:11:45.571  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]]
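[annotation] The serial number in this step is valid ASCII with a trailing 0x1f (unit separator) appended via echo -e '\x1f', which is why the request JSON renders it as "SPDKISFASTANDAWESOME\u001f". A hedged repro of the negative test as invoked above:
    # expected to fail with Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
        -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15088 \
        || echo "rejected as expected"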
00:11:45.571     06:22:02	-- target/invalid.sh@50 -- # echo -e '\x1f'
00:11:45.571    06:22:02	-- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28389
00:11:45.571  [2024-12-16 06:22:02.536442] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28389: invalid model number 'SPDK_Controller'
00:11:45.830   06:22:02	-- target/invalid.sh@50 -- # out='2024/12/16 06:22:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode28389], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller
00:11:45.830  request:
00:11:45.830  {
00:11:45.830    "method": "nvmf_create_subsystem",
00:11:45.830    "params": {
00:11:45.830      "nqn": "nqn.2016-06.io.spdk:cnode28389",
00:11:45.830      "model_number": "SPDK_Controller\u001f"
00:11:45.830    }
00:11:45.830  }
00:11:45.830  Got JSON-RPC error response
00:11:45.830  GoRPCClient: error on JSON-RPC call'
00:11:45.830   06:22:02	-- target/invalid.sh@51 -- # [[ 2024/12/16 06:22:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode28389], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller
00:11:45.830  request:
00:11:45.830  {
00:11:45.830    "method": "nvmf_create_subsystem",
00:11:45.830    "params": {
00:11:45.830      "nqn": "nqn.2016-06.io.spdk:cnode28389",
00:11:45.830      "model_number": "SPDK_Controller\u001f"
00:11:45.830    }
00:11:45.830  }
00:11:45.830  Got JSON-RPC error response
00:11:45.830  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]]
00:11:45.830     06:22:02	-- target/invalid.sh@54 -- # gen_random_s 21
00:11:45.830     06:22:02	-- target/invalid.sh@19 -- # local length=21 ll
00:11:45.831     06:22:02	-- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:11:45.831     06:22:02	-- target/invalid.sh@21 -- # local chars
00:11:45.831     06:22:02	-- target/invalid.sh@22 -- # local string
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll = 0 ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 39
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x27'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=\'
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 106
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x6a'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=j
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 41
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x29'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=')'
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 114
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x72'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=r
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 65
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x41'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=A
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 32
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x20'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=' '
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 87
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x57'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=W
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 109
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x6d'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=m
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 32
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x20'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=' '
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 112
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x70'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=p
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 85
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x55'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=U
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 47
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x2f'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=/
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 102
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x66'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=f
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 75
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x4b'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=K
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 62
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x3e'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+='>'
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 79
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x4f'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=O
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 75
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x4b'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=K
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 69
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x45'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=E
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 90
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x5a'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+=Z
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 123
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x7b'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+='{'
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831       06:22:02	-- target/invalid.sh@25 -- # printf %x 96
00:11:45.831      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x60'
00:11:45.831     06:22:02	-- target/invalid.sh@25 -- # string+='`'
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:45.831     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:45.831     06:22:02	-- target/invalid.sh@28 -- # [[ ' == \- ]]
00:11:45.831     06:22:02	-- target/invalid.sh@31 -- # echo ''\''j)rA Wm pU/fK>OKEZ{`'
00:11:45.831    06:22:02	-- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s ''\''j)rA Wm pU/fK>OKEZ{`' nqn.2016-06.io.spdk:cnode28221
00:11:46.090  [2024-12-16 06:22:02.952833] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28221: invalid serial number ''j)rA Wm pU/fK>OKEZ{`'
00:11:46.090   06:22:02	-- target/invalid.sh@54 -- # out='2024/12/16 06:22:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28221 serial_number:'\''j)rA Wm pU/fK>OKEZ{`], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN '\''j)rA Wm pU/fK>OKEZ{`
00:11:46.090  request:
00:11:46.090  {
00:11:46.090    "method": "nvmf_create_subsystem",
00:11:46.090    "params": {
00:11:46.090      "nqn": "nqn.2016-06.io.spdk:cnode28221",
00:11:46.090      "serial_number": "'\''j)rA Wm pU/fK>OKEZ{`"
00:11:46.090    }
00:11:46.090  }
00:11:46.090  Got JSON-RPC error response
00:11:46.090  GoRPCClient: error on JSON-RPC call'
00:11:46.090   06:22:02	-- target/invalid.sh@55 -- # [[ 2024/12/16 06:22:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28221 serial_number:'j)rA Wm pU/fK>OKEZ{`], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 'j)rA Wm pU/fK>OKEZ{`
00:11:46.090  request:
00:11:46.090  {
00:11:46.090    "method": "nvmf_create_subsystem",
00:11:46.090    "params": {
00:11:46.090      "nqn": "nqn.2016-06.io.spdk:cnode28221",
00:11:46.090      "serial_number": "'j)rA Wm pU/fK>OKEZ{`"
00:11:46.090    }
00:11:46.090  }
00:11:46.090  Got JSON-RPC error response
00:11:46.090  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]]
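[annotation] The long printf/echo trace above is gen_random_s assembling a 21-character serial one random printable character at a time (RANDOM=0 earlier in invalid.sh makes the sequence reproducible). A compact, hedged sketch of the same idea, not the verbatim target/invalid.sh helper:
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))    # printable ASCII codes, matching the chars array in the trace
        for (( ll = 0; ll < length; ll++ )); do
            # pick a random code, render it as \xNN, and append the character
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # the real helper also guards against a string starting with '-' (see the [[ ... == \- ]] check above)
        echo "$string"
    }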
00:11:46.090     06:22:02	-- target/invalid.sh@58 -- # gen_random_s 41
00:11:46.090     06:22:02	-- target/invalid.sh@19 -- # local length=41 ll
00:11:46.090     06:22:02	-- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:11:46.090     06:22:02	-- target/invalid.sh@21 -- # local chars
00:11:46.090     06:22:02	-- target/invalid.sh@22 -- # local string
00:11:46.090     06:22:02	-- target/invalid.sh@24 -- # (( ll = 0 ))
00:11:46.090     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:02	-- target/invalid.sh@25 -- # printf %x 42
00:11:46.090      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x2a'
00:11:46.090     06:22:02	-- target/invalid.sh@25 -- # string+='*'
00:11:46.090     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:02	-- target/invalid.sh@25 -- # printf %x 63
00:11:46.090      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x3f'
00:11:46.090     06:22:02	-- target/invalid.sh@25 -- # string+='?'
00:11:46.090     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:02	-- target/invalid.sh@25 -- # printf %x 54
00:11:46.090      06:22:02	-- target/invalid.sh@25 -- # echo -e '\x36'
00:11:46.090     06:22:02	-- target/invalid.sh@25 -- # string+=6
00:11:46.090     06:22:02	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:02	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 34
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x22'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+='"'
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 71
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x47'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+=G
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 118
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x76'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+=v
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 48
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x30'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+=0
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 63
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x3f'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+='?'
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 102
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x66'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+=f
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 78
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x4e'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+=N
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 83
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x53'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+=S
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 106
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x6a'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+=j
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 83
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x53'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+=S
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.090       06:22:03	-- target/invalid.sh@25 -- # printf %x 62
00:11:46.090      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x3e'
00:11:46.090     06:22:03	-- target/invalid.sh@25 -- # string+='>'
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.090     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 50
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x32'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=2
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 71
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x47'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=G
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 115
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x73'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=s
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 108
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x6c'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=l
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 58
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x3a'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=:
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 44
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x2c'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=,
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 71
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x47'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=G
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 56
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x38'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=8
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 63
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x3f'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+='?'
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 121
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x79'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=y
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 59
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x3b'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=';'
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 63
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x3f'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+='?'
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 119
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x77'
00:11:46.349     06:22:03	-- target/invalid.sh@25 -- # string+=w
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.349     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.349       06:22:03	-- target/invalid.sh@25 -- # printf %x 126
00:11:46.349      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x7e'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+='~'
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 38
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x26'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+='&'
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 113
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x71'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=q
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 104
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x68'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=h
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 83
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x53'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=S
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 99
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x63'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=c
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 98
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x62'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=b
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 91
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x5b'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+='['
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 72
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x48'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=H
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 56
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x38'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=8
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 121
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x79'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=y
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 92
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x5c'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+='\'
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 33
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x21'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+='!'
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350       06:22:03	-- target/invalid.sh@25 -- # printf %x 59
00:11:46.350      06:22:03	-- target/invalid.sh@25 -- # echo -e '\x3b'
00:11:46.350     06:22:03	-- target/invalid.sh@25 -- # string+=';'
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll++ ))
00:11:46.350     06:22:03	-- target/invalid.sh@24 -- # (( ll < length ))
00:11:46.350     06:22:03	-- target/invalid.sh@28 -- # [[ * == \- ]]
00:11:46.350     06:22:03	-- target/invalid.sh@31 -- # echo '*?6"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\!;'
00:11:46.350    06:22:03	-- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '*?6"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\!;' nqn.2016-06.io.spdk:cnode5293
00:11:46.608  [2024-12-16 06:22:03.477337] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5293: invalid model number '*?6"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\!;'
00:11:46.609   06:22:03	-- target/invalid.sh@58 -- # out='2024/12/16 06:22:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:*?6"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\!; nqn:nqn.2016-06.io.spdk:cnode5293], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN *?6"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\!;
00:11:46.609  request:
00:11:46.609  {
00:11:46.609    "method": "nvmf_create_subsystem",
00:11:46.609    "params": {
00:11:46.609      "nqn": "nqn.2016-06.io.spdk:cnode5293",
00:11:46.609      "model_number": "*?6\"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\\!;"
00:11:46.609    }
00:11:46.609  }
00:11:46.609  Got JSON-RPC error response
00:11:46.609  GoRPCClient: error on JSON-RPC call'
00:11:46.609   06:22:03	-- target/invalid.sh@59 -- # [[ 2024/12/16 06:22:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:*?6"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\!; nqn:nqn.2016-06.io.spdk:cnode5293], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN *?6"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\!;
00:11:46.609  request:
00:11:46.609  {
00:11:46.609    "method": "nvmf_create_subsystem",
00:11:46.609    "params": {
00:11:46.609      "nqn": "nqn.2016-06.io.spdk:cnode5293",
00:11:46.609      "model_number": "*?6\"Gv0?fNSjS>2Gsl:,G8?y;?w~&qhScb[H8y\\!;"
00:11:46.609    }
00:11:46.609  }
00:11:46.609  Got JSON-RPC error response
00:11:46.609  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]]
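The loop traced above (invalid.sh@24-@25) assembles the bogus model number one character at a time: pick a byte value, render it with printf %x plus echo -e '\xNN', and append it to string; @58 then feeds the 41-character result (one more than the 40-byte NVMe model number field) to nvmf_create_subsystem, and @59 only requires that the failure mentions "Invalid MN". A condensed sketch of that pattern; the random character source below is an assumption, not what invalid.sh actually uses:

    # Sketch only: build an over-length random model number and expect the RPC to reject it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # path as used in this run
    string=''
    length=41
    for (( ll = 0; ll < length; ll++ )); do
        hex=$(printf %x $(( RANDOM % 94 + 33 )))           # printable ASCII 33..126 (assumed source)
        string+=$(echo -e "\x$hex")
    done
    out=$("$rpc" nvmf_create_subsystem -d "$string" nqn.2016-06.io.spdk:cnode5293 2>&1) || true
    [[ $out == *'Invalid MN'* ]] && echo 'rejected as expected'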
00:11:46.609   06:22:03	-- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:11:46.867  [2024-12-16 06:22:03.773658] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:46.867   06:22:03	-- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:11:47.126   06:22:04	-- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:11:47.126    06:22:04	-- target/invalid.sh@67 -- # echo ''
00:11:47.126    06:22:04	-- target/invalid.sh@67 -- # head -n 1
00:11:47.126   06:22:04	-- target/invalid.sh@67 -- # IP=
00:11:47.126    06:22:04	-- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:11:47.384  [2024-12-16 06:22:04.348213] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:11:47.643   06:22:04	-- target/invalid.sh@69 -- # out='2024/12/16 06:22:04 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters
00:11:47.643  request:
00:11:47.643  {
00:11:47.643    "method": "nvmf_subsystem_remove_listener",
00:11:47.643    "params": {
00:11:47.643      "nqn": "nqn.2016-06.io.spdk:cnode",
00:11:47.643      "listen_address": {
00:11:47.643        "trtype": "tcp",
00:11:47.643        "traddr": "",
00:11:47.643        "trsvcid": "4421"
00:11:47.643      }
00:11:47.643    }
00:11:47.643  }
00:11:47.643  Got JSON-RPC error response
00:11:47.643  GoRPCClient: error on JSON-RPC call'
00:11:47.643   06:22:04	-- target/invalid.sh@70 -- # [[ 2024/12/16 06:22:04 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters
00:11:47.643  request:
00:11:47.643  {
00:11:47.643    "method": "nvmf_subsystem_remove_listener",
00:11:47.643    "params": {
00:11:47.643      "nqn": "nqn.2016-06.io.spdk:cnode",
00:11:47.643      "listen_address": {
00:11:47.643        "trtype": "tcp",
00:11:47.643        "traddr": "",
00:11:47.643        "trsvcid": "4421"
00:11:47.643      }
00:11:47.643    }
00:11:47.643  }
00:11:47.643  Got JSON-RPC error response
00:11:47.643  GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
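invalid.sh@67 tried to derive a listener IP but got nothing back (IP= is empty), so @69 calls nvmf_subsystem_remove_listener with -a '' and the target answers rc -2 / "Invalid parameters"; @70 merely asserts the error is not the "Unable to stop listener." message. For contrast, a well-formed remove pairs with a prior add on the same address and port (illustrative values below, not taken from this test):

    # Illustrative add/remove pair; address and port are assumptions
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode
    "$rpc" nvmf_subsystem_add_listener    "$nqn" -t tcp -a 10.0.0.2 -s 4421
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421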
00:11:47.643    06:22:04	-- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22869 -i 0
00:11:47.902  [2024-12-16 06:22:04.640400] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22869: invalid cntlid range [0-65519]
00:11:47.902   06:22:04	-- target/invalid.sh@73 -- # out='2024/12/16 06:22:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode22869], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519]
00:11:47.902  request:
00:11:47.902  {
00:11:47.902    "method": "nvmf_create_subsystem",
00:11:47.902    "params": {
00:11:47.902      "nqn": "nqn.2016-06.io.spdk:cnode22869",
00:11:47.902      "min_cntlid": 0
00:11:47.902    }
00:11:47.902  }
00:11:47.902  Got JSON-RPC error response
00:11:47.902  GoRPCClient: error on JSON-RPC call'
00:11:47.902   06:22:04	-- target/invalid.sh@74 -- # [[ 2024/12/16 06:22:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode22869], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519]
00:11:47.902  request:
00:11:47.902  {
00:11:47.902    "method": "nvmf_create_subsystem",
00:11:47.902    "params": {
00:11:47.902      "nqn": "nqn.2016-06.io.spdk:cnode22869",
00:11:47.902      "min_cntlid": 0
00:11:47.902    }
00:11:47.902  }
00:11:47.902  Got JSON-RPC error response
00:11:47.902  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:47.902    06:22:04	-- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3904 -i 65520
00:11:47.902  [2024-12-16 06:22:04.864622] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3904: invalid cntlid range [65520-65519]
00:11:48.160   06:22:04	-- target/invalid.sh@75 -- # out='2024/12/16 06:22:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3904], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519]
00:11:48.160  request:
00:11:48.160  {
00:11:48.160    "method": "nvmf_create_subsystem",
00:11:48.160    "params": {
00:11:48.160      "nqn": "nqn.2016-06.io.spdk:cnode3904",
00:11:48.160      "min_cntlid": 65520
00:11:48.160    }
00:11:48.160  }
00:11:48.160  Got JSON-RPC error response
00:11:48.160  GoRPCClient: error on JSON-RPC call'
00:11:48.160   06:22:04	-- target/invalid.sh@76 -- # [[ 2024/12/16 06:22:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3904], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519]
00:11:48.160  request:
00:11:48.160  {
00:11:48.160    "method": "nvmf_create_subsystem",
00:11:48.160    "params": {
00:11:48.160      "nqn": "nqn.2016-06.io.spdk:cnode3904",
00:11:48.160      "min_cntlid": 65520
00:11:48.160    }
00:11:48.160  }
00:11:48.160  Got JSON-RPC error response
00:11:48.160  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:48.160    06:22:04	-- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32713 -I 0
00:11:48.160  [2024-12-16 06:22:05.096892] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32713: invalid cntlid range [1-0]
00:11:48.160   06:22:05	-- target/invalid.sh@77 -- # out='2024/12/16 06:22:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode32713], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0]
00:11:48.160  request:
00:11:48.160  {
00:11:48.160    "method": "nvmf_create_subsystem",
00:11:48.160    "params": {
00:11:48.160      "nqn": "nqn.2016-06.io.spdk:cnode32713",
00:11:48.160      "max_cntlid": 0
00:11:48.160    }
00:11:48.160  }
00:11:48.160  Got JSON-RPC error response
00:11:48.160  GoRPCClient: error on JSON-RPC call'
00:11:48.160   06:22:05	-- target/invalid.sh@78 -- # [[ 2024/12/16 06:22:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode32713], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0]
00:11:48.160  request:
00:11:48.160  {
00:11:48.160    "method": "nvmf_create_subsystem",
00:11:48.160    "params": {
00:11:48.160      "nqn": "nqn.2016-06.io.spdk:cnode32713",
00:11:48.160      "max_cntlid": 0
00:11:48.160    }
00:11:48.160  }
00:11:48.160  Got JSON-RPC error response
00:11:48.160  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:48.160    06:22:05	-- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13038 -I 65520
00:11:48.724  [2024-12-16 06:22:05.429223] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13038: invalid cntlid range [1-65520]
00:11:48.724   06:22:05	-- target/invalid.sh@79 -- # out='2024/12/16 06:22:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13038], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520]
00:11:48.724  request:
00:11:48.724  {
00:11:48.724    "method": "nvmf_create_subsystem",
00:11:48.724    "params": {
00:11:48.724      "nqn": "nqn.2016-06.io.spdk:cnode13038",
00:11:48.724      "max_cntlid": 65520
00:11:48.724    }
00:11:48.724  }
00:11:48.724  Got JSON-RPC error response
00:11:48.724  GoRPCClient: error on JSON-RPC call'
00:11:48.724   06:22:05	-- target/invalid.sh@80 -- # [[ 2024/12/16 06:22:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13038], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520]
00:11:48.724  request:
00:11:48.724  {
00:11:48.724    "method": "nvmf_create_subsystem",
00:11:48.724    "params": {
00:11:48.724      "nqn": "nqn.2016-06.io.spdk:cnode13038",
00:11:48.724      "max_cntlid": 65520
00:11:48.724    }
00:11:48.724  }
00:11:48.724  Got JSON-RPC error response
00:11:48.724  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:48.724    06:22:05	-- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9968 -i 6 -I 5
00:11:48.982  [2024-12-16 06:22:05.785544] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9968: invalid cntlid range [6-5]
00:11:48.982   06:22:05	-- target/invalid.sh@83 -- # out='2024/12/16 06:22:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode9968], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5]
00:11:48.982  request:
00:11:48.982  {
00:11:48.982    "method": "nvmf_create_subsystem",
00:11:48.982    "params": {
00:11:48.982      "nqn": "nqn.2016-06.io.spdk:cnode9968",
00:11:48.982      "min_cntlid": 6,
00:11:48.982      "max_cntlid": 5
00:11:48.982    }
00:11:48.982  }
00:11:48.982  Got JSON-RPC error response
00:11:48.982  GoRPCClient: error on JSON-RPC call'
00:11:48.982   06:22:05	-- target/invalid.sh@84 -- # [[ 2024/12/16 06:22:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode9968], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5]
00:11:48.982  request:
00:11:48.982  {
00:11:48.982    "method": "nvmf_create_subsystem",
00:11:48.982    "params": {
00:11:48.982      "nqn": "nqn.2016-06.io.spdk:cnode9968",
00:11:48.982      "min_cntlid": 6,
00:11:48.982      "max_cntlid": 5
00:11:48.982    }
00:11:48.982  }
00:11:48.982  Got JSON-RPC error response
00:11:48.982  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
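Taken together, the five rejections from invalid.sh@73-@84 pin down the controller-ID constraints: min_cntlid must be at least 1 (range [0-65519] refused), max_cntlid at most 65519 (ranges [65520-65519] and [1-65520] refused), and min must not exceed max (range [6-5] refused). A presumably-accepted counterpart, shown for contrast only and not executed in this run:

    # Illustrative: 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
        nqn.2016-06.io.spdk:cnode9968 -i 1 -I 65519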
00:11:48.982    06:22:05	-- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:11:49.239   06:22:05	-- target/invalid.sh@87 -- # out='request:
00:11:49.239  {
00:11:49.239    "name": "foobar",
00:11:49.239    "method": "nvmf_delete_target",
00:11:49.239    "req_id": 1
00:11:49.239  }
00:11:49.239  Got JSON-RPC error response
00:11:49.239  response:
00:11:49.239  {
00:11:49.239    "code": -32602,
00:11:49.239    "message": "The specified target doesn'\''t exist, cannot delete it."
00:11:49.239  }'
00:11:49.239   06:22:05	-- target/invalid.sh@88 -- # [[ request:
00:11:49.239  {
00:11:49.239    "name": "foobar",
00:11:49.239    "method": "nvmf_delete_target",
00:11:49.239    "req_id": 1
00:11:49.239  }
00:11:49.239  Got JSON-RPC error response
00:11:49.239  response:
00:11:49.239  {
00:11:49.239    "code": -32602,
00:11:49.239    "message": "The specified target doesn't exist, cannot delete it."
00:11:49.239  } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
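Here multitarget_rpc.py reports a -32602 error because no target named foobar exists, and invalid.sh@88 simply pattern-matches the captured text, the same capture-and-assert idiom used for every negative case above. Roughly:

    # Sketch of the capture-and-assert pattern used throughout invalid.sh
    out=$(/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py \
          nvmf_delete_target --name foobar 2>&1) || true
    [[ $out == *"The specified target doesn't exist, cannot delete it."* ]]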
00:11:49.239   06:22:05	-- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:11:49.239   06:22:05	-- target/invalid.sh@91 -- # nvmftestfini
00:11:49.239   06:22:05	-- nvmf/common.sh@476 -- # nvmfcleanup
00:11:49.239   06:22:05	-- nvmf/common.sh@116 -- # sync
00:11:49.239   06:22:05	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:11:49.239   06:22:05	-- nvmf/common.sh@119 -- # set +e
00:11:49.239   06:22:05	-- nvmf/common.sh@120 -- # for i in {1..20}
00:11:49.239   06:22:05	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:11:49.239  rmmod nvme_tcp
00:11:49.239  rmmod nvme_fabrics
00:11:49.239  rmmod nvme_keyring
00:11:49.239   06:22:06	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:11:49.239   06:22:06	-- nvmf/common.sh@123 -- # set -e
00:11:49.239   06:22:06	-- nvmf/common.sh@124 -- # return 0
00:11:49.239   06:22:06	-- nvmf/common.sh@477 -- # '[' -n 66474 ']'
00:11:49.239   06:22:06	-- nvmf/common.sh@478 -- # killprocess 66474
00:11:49.239   06:22:06	-- common/autotest_common.sh@936 -- # '[' -z 66474 ']'
00:11:49.239   06:22:06	-- common/autotest_common.sh@940 -- # kill -0 66474
00:11:49.239    06:22:06	-- common/autotest_common.sh@941 -- # uname
00:11:49.239   06:22:06	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:49.239    06:22:06	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66474
00:11:49.239   06:22:06	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:49.239   06:22:06	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:49.239   06:22:06	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 66474'
00:11:49.239  killing process with pid 66474
00:11:49.239   06:22:06	-- common/autotest_common.sh@955 -- # kill 66474
00:11:49.239   06:22:06	-- common/autotest_common.sh@960 -- # wait 66474
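killprocess (autotest_common.sh@936-@960) is deliberately defensive before taking down the target, pid 66474 here: it checks the pid is non-empty, probes it with kill -0, looks up the command name with ps -o comm= (reactor_0 here) so a sudo-wrapped process can be handled specially, then kills and waits. A condensed sketch of that guard:

    # Condensed sketch of the killprocess guard traced above
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1                   # the real helper special-cases sudo-wrapped targets
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }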
00:11:49.497   06:22:06	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:11:49.497   06:22:06	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:11:49.497   06:22:06	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:11:49.497   06:22:06	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:49.497   06:22:06	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:11:49.497   06:22:06	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:49.497   06:22:06	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:49.497    06:22:06	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:49.497   06:22:06	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:11:49.497  
00:11:49.497  real	0m6.261s
00:11:49.497  user	0m24.945s
00:11:49.497  sys	0m1.334s
00:11:49.497   06:22:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:49.497   06:22:06	-- common/autotest_common.sh@10 -- # set +x
00:11:49.497  ************************************
00:11:49.497  END TEST nvmf_invalid
00:11:49.497  ************************************
00:11:49.497   06:22:06	-- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp
00:11:49.497   06:22:06	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:49.497   06:22:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:49.497   06:22:06	-- common/autotest_common.sh@10 -- # set +x
00:11:49.497  ************************************
00:11:49.497  START TEST nvmf_abort
00:11:49.497  ************************************
00:11:49.497   06:22:06	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp
00:11:49.497  * Looking for test storage...
00:11:49.497  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:49.497    06:22:06	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:49.497     06:22:06	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:49.497     06:22:06	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:49.755    06:22:06	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:49.755    06:22:06	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:49.755    06:22:06	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:49.755    06:22:06	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:49.755    06:22:06	-- scripts/common.sh@335 -- # IFS=.-:
00:11:49.755    06:22:06	-- scripts/common.sh@335 -- # read -ra ver1
00:11:49.755    06:22:06	-- scripts/common.sh@336 -- # IFS=.-:
00:11:49.755    06:22:06	-- scripts/common.sh@336 -- # read -ra ver2
00:11:49.755    06:22:06	-- scripts/common.sh@337 -- # local 'op=<'
00:11:49.755    06:22:06	-- scripts/common.sh@339 -- # ver1_l=2
00:11:49.755    06:22:06	-- scripts/common.sh@340 -- # ver2_l=1
00:11:49.755    06:22:06	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:49.755    06:22:06	-- scripts/common.sh@343 -- # case "$op" in
00:11:49.755    06:22:06	-- scripts/common.sh@344 -- # : 1
00:11:49.755    06:22:06	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:49.755    06:22:06	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:49.755     06:22:06	-- scripts/common.sh@364 -- # decimal 1
00:11:49.755     06:22:06	-- scripts/common.sh@352 -- # local d=1
00:11:49.755     06:22:06	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:49.755     06:22:06	-- scripts/common.sh@354 -- # echo 1
00:11:49.755    06:22:06	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:49.755     06:22:06	-- scripts/common.sh@365 -- # decimal 2
00:11:49.755     06:22:06	-- scripts/common.sh@352 -- # local d=2
00:11:49.755     06:22:06	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:49.755     06:22:06	-- scripts/common.sh@354 -- # echo 2
00:11:49.755    06:22:06	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:49.755    06:22:06	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:49.755    06:22:06	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:49.755    06:22:06	-- scripts/common.sh@367 -- # return 0
00:11:49.755    06:22:06	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:49.755    06:22:06	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:49.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:49.755  		--rc genhtml_branch_coverage=1
00:11:49.755  		--rc genhtml_function_coverage=1
00:11:49.755  		--rc genhtml_legend=1
00:11:49.755  		--rc geninfo_all_blocks=1
00:11:49.755  		--rc geninfo_unexecuted_blocks=1
00:11:49.755  		
00:11:49.755  		'
00:11:49.755    06:22:06	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:49.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:49.755  		--rc genhtml_branch_coverage=1
00:11:49.755  		--rc genhtml_function_coverage=1
00:11:49.755  		--rc genhtml_legend=1
00:11:49.755  		--rc geninfo_all_blocks=1
00:11:49.755  		--rc geninfo_unexecuted_blocks=1
00:11:49.755  		
00:11:49.755  		'
00:11:49.755    06:22:06	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:49.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:49.755  		--rc genhtml_branch_coverage=1
00:11:49.755  		--rc genhtml_function_coverage=1
00:11:49.755  		--rc genhtml_legend=1
00:11:49.755  		--rc geninfo_all_blocks=1
00:11:49.755  		--rc geninfo_unexecuted_blocks=1
00:11:49.755  		
00:11:49.755  		'
00:11:49.755    06:22:06	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:49.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:49.755  		--rc genhtml_branch_coverage=1
00:11:49.755  		--rc genhtml_function_coverage=1
00:11:49.755  		--rc genhtml_legend=1
00:11:49.755  		--rc geninfo_all_blocks=1
00:11:49.755  		--rc geninfo_unexecuted_blocks=1
00:11:49.755  		
00:11:49.755  		'
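The block above only does tooling detection: it reads the installed lcov version, compares it against 1.15 with cmp_versions, and, since the toolchain here is new enough, exports LCOV_OPTS and LCOV with branch- and function-coverage settings for any later coverage pass. A hypothetical downstream use of those exports (no coverage capture actually happens at this point in the log):

    # Hypothetical coverage capture using the exported variables
    $LCOV --capture --directory . --output-file cov.info
    genhtml --branch-coverage --function-coverage --legend cov.info --output-directory cov_report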
00:11:49.755   06:22:06	-- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:49.755     06:22:06	-- nvmf/common.sh@7 -- # uname -s
00:11:49.755    06:22:06	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:49.755    06:22:06	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:49.755    06:22:06	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:49.755    06:22:06	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:49.755    06:22:06	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:49.755    06:22:06	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:49.755    06:22:06	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:49.755    06:22:06	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:49.755    06:22:06	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:49.755     06:22:06	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:49.755    06:22:06	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:11:49.756    06:22:06	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:11:49.756    06:22:06	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:49.756    06:22:06	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:49.756    06:22:06	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:49.756    06:22:06	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:49.756     06:22:06	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:49.756     06:22:06	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:49.756     06:22:06	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:49.756      06:22:06	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:49.756      06:22:06	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:49.756      06:22:06	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:49.756      06:22:06	-- paths/export.sh@5 -- # export PATH
00:11:49.756      06:22:06	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:49.756    06:22:06	-- nvmf/common.sh@46 -- # : 0
00:11:49.756    06:22:06	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:11:49.756    06:22:06	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:11:49.756    06:22:06	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:11:49.756    06:22:06	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:49.756    06:22:06	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:49.756    06:22:06	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:11:49.756    06:22:06	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:11:49.756    06:22:06	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:11:49.756   06:22:06	-- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:49.756   06:22:06	-- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:11:49.756   06:22:06	-- target/abort.sh@14 -- # nvmftestinit
00:11:49.756   06:22:06	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:11:49.756   06:22:06	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:49.756   06:22:06	-- nvmf/common.sh@436 -- # prepare_net_devs
00:11:49.756   06:22:06	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:11:49.756   06:22:06	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:11:49.756   06:22:06	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:49.756   06:22:06	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:49.756    06:22:06	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:49.756   06:22:06	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:11:49.756   06:22:06	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:11:49.756   06:22:06	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:11:49.756   06:22:06	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:11:49.756   06:22:06	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:11:49.756   06:22:06	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:11:49.756   06:22:06	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:49.756   06:22:06	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:49.756   06:22:06	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:11:49.756   06:22:06	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:11:49.756   06:22:06	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:49.756   06:22:06	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:49.756   06:22:06	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:49.756   06:22:06	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:49.756   06:22:06	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:49.756   06:22:06	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:49.756   06:22:06	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:49.756   06:22:06	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:49.756   06:22:06	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:11:49.756   06:22:06	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:11:49.756  Cannot find device "nvmf_tgt_br"
00:11:49.756   06:22:06	-- nvmf/common.sh@154 -- # true
00:11:49.756   06:22:06	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:11:49.756  Cannot find device "nvmf_tgt_br2"
00:11:49.756   06:22:06	-- nvmf/common.sh@155 -- # true
00:11:49.756   06:22:06	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:11:49.756   06:22:06	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:11:49.756  Cannot find device "nvmf_tgt_br"
00:11:49.756   06:22:06	-- nvmf/common.sh@157 -- # true
00:11:49.756   06:22:06	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:11:49.756  Cannot find device "nvmf_tgt_br2"
00:11:49.756   06:22:06	-- nvmf/common.sh@158 -- # true
00:11:49.756   06:22:06	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:11:49.756   06:22:06	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:11:49.756   06:22:06	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:49.756  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:49.756   06:22:06	-- nvmf/common.sh@161 -- # true
00:11:49.756   06:22:06	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:49.756  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:49.756   06:22:06	-- nvmf/common.sh@162 -- # true
00:11:49.756   06:22:06	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:11:49.756   06:22:06	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:49.756   06:22:06	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:49.756   06:22:06	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:49.756   06:22:06	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:49.756   06:22:06	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:49.756   06:22:06	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:49.756   06:22:06	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:11:49.756   06:22:06	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:11:49.756   06:22:06	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:11:49.756   06:22:06	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:11:49.756   06:22:06	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:11:49.756   06:22:06	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:11:49.756   06:22:06	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:49.756   06:22:06	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:49.756   06:22:06	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:50.014   06:22:06	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:11:50.014   06:22:06	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:11:50.014   06:22:06	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:11:50.014   06:22:06	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:50.014   06:22:06	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:50.014   06:22:06	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:50.014   06:22:06	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:50.014   06:22:06	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:11:50.014  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:50.014  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms
00:11:50.014  
00:11:50.014  --- 10.0.0.2 ping statistics ---
00:11:50.014  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:50.014  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:11:50.014   06:22:06	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:11:50.014  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:50.014  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms
00:11:50.014  
00:11:50.014  --- 10.0.0.3 ping statistics ---
00:11:50.014  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:50.014  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:11:50.014   06:22:06	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:50.014  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:50.014  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms
00:11:50.014  
00:11:50.014  --- 10.0.0.1 ping statistics ---
00:11:50.014  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:50.014  rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms
00:11:50.014   06:22:06	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:50.014   06:22:06	-- nvmf/common.sh@421 -- # return 0
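nvmf_veth_init (common.sh@140-@208) first tears down anything left from a previous run (hence the harmless "Cannot find device" / "Cannot open network namespace" lines), then rebuilds the test topology: namespace nvmf_tgt_ns_spdk holds nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3); their peers nvmf_tgt_br/nvmf_tgt_br2 plus the initiator peer nvmf_init_br sit on bridge nvmf_br; nvmf_init_if (10.0.0.1) stays in the root namespace; and the three pings verify connectivity both ways. Stripped of the error-tolerant teardown, the build-up amounts to:

    # Recap of the veth/bridge topology constructed above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT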
00:11:50.014   06:22:06	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:11:50.014   06:22:06	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:50.014   06:22:06	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:11:50.014   06:22:06	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:11:50.014   06:22:06	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:50.014   06:22:06	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:11:50.014   06:22:06	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:11:50.014   06:22:06	-- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:11:50.014   06:22:06	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:11:50.014   06:22:06	-- common/autotest_common.sh@722 -- # xtrace_disable
00:11:50.014   06:22:06	-- common/autotest_common.sh@10 -- # set +x
00:11:50.014   06:22:06	-- nvmf/common.sh@469 -- # nvmfpid=66993
00:11:50.014   06:22:06	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:11:50.014   06:22:06	-- nvmf/common.sh@470 -- # waitforlisten 66993
00:11:50.014   06:22:06	-- common/autotest_common.sh@829 -- # '[' -z 66993 ']'
00:11:50.014   06:22:06	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:50.014   06:22:06	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:50.014  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:50.014   06:22:06	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:50.014   06:22:06	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:50.014   06:22:06	-- common/autotest_common.sh@10 -- # set +x
00:11:50.014  [2024-12-16 06:22:06.866000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:50.014  [2024-12-16 06:22:06.866093] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:50.272  [2024-12-16 06:22:06.996404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:50.272  [2024-12-16 06:22:07.126921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:50.272  [2024-12-16 06:22:07.127062] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:50.272  [2024-12-16 06:22:07.127075] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:50.272  [2024-12-16 06:22:07.127084] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:50.272  [2024-12-16 06:22:07.127179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:50.272  [2024-12-16 06:22:07.127642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:50.272  [2024-12-16 06:22:07.127655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:51.203   06:22:07	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:51.203   06:22:07	-- common/autotest_common.sh@862 -- # return 0
00:11:51.203   06:22:07	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:11:51.203   06:22:07	-- common/autotest_common.sh@728 -- # xtrace_disable
00:11:51.203   06:22:07	-- common/autotest_common.sh@10 -- # set +x
00:11:51.203   06:22:07	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
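nvmfappstart runs the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 66993 in this run) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock responds; only then is the cleanup trap installed and the test allowed to issue RPCs. A rough sketch of that start-and-wait sequence; the polling loop below is an assumption, not lifted from waitforlisten itself:

    # Rough sketch; the real helpers live in autotest_common.sh and nvmf/common.sh
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1                      # give up if the target died
        sleep 0.1
    done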
00:11:51.203   06:22:07	-- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:11:51.203   06:22:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.203   06:22:07	-- common/autotest_common.sh@10 -- # set +x
00:11:51.203  [2024-12-16 06:22:07.931295] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:51.203   06:22:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.203   06:22:07	-- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:11:51.203   06:22:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.203   06:22:07	-- common/autotest_common.sh@10 -- # set +x
00:11:51.203  Malloc0
00:11:51.203   06:22:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.203   06:22:07	-- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:51.203   06:22:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.203   06:22:07	-- common/autotest_common.sh@10 -- # set +x
00:11:51.203  Delay0
00:11:51.203   06:22:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.203   06:22:07	-- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:11:51.203   06:22:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.203   06:22:07	-- common/autotest_common.sh@10 -- # set +x
00:11:51.203   06:22:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.203   06:22:07	-- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:11:51.203   06:22:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.203   06:22:07	-- common/autotest_common.sh@10 -- # set +x
00:11:51.203   06:22:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.203   06:22:07	-- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:11:51.203   06:22:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.203   06:22:07	-- common/autotest_common.sh@10 -- # set +x
00:11:51.203  [2024-12-16 06:22:08.001902] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:51.203   06:22:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.203   06:22:08	-- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:51.203   06:22:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.203   06:22:08	-- common/autotest_common.sh@10 -- # set +x
00:11:51.203   06:22:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
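At this point abort.sh has assembled the whole target side over JSON-RPC: a TCP transport, a 64 MB Malloc bdev with 4096-byte blocks, a Delay0 delay bdev layered on top of it (the large per-op latencies keep I/O queued long enough to be abortable), subsystem nqn.2016-06.io.spdk:cnode0 exposing Delay0, and data plus discovery listeners on 10.0.0.2:4420. The same sequence expressed as direct rpc.py calls (rpc_cmd above is roughly this, assuming the default /var/tmp/spdk.sock socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256
    "$rpc" bdev_malloc_create 64 4096 -b Malloc0
    "$rpc" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420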
00:11:51.203   06:22:08	-- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:11:51.461  [2024-12-16 06:22:08.196346] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:11:53.363  Initializing NVMe Controllers
00:11:53.363  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:11:53.363  controller IO queue size 128 less than required
00:11:53.363  Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:11:53.363  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:11:53.363  Initialization complete. Launching workers.
00:11:53.363  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 31714
00:11:53.363  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31779, failed to submit 62
00:11:53.363  	 success 31714, unsuccess 65, failed 0
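The abort summary is self-consistent: 31,714 successful aborts + 65 unsuccessful = 31,779 submitted abort commands (with a further 62 that could not be submitted), which lines up with the 31,714 I/Os reported as "failed" next to the 127 that completed normally during the 1-second run (-t 1).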
00:11:53.363   06:22:10	-- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:11:53.363   06:22:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.363   06:22:10	-- common/autotest_common.sh@10 -- # set +x
00:11:53.363   06:22:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.363   06:22:10	-- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:11:53.363   06:22:10	-- target/abort.sh@38 -- # nvmftestfini
00:11:53.363   06:22:10	-- nvmf/common.sh@476 -- # nvmfcleanup
00:11:53.363   06:22:10	-- nvmf/common.sh@116 -- # sync
00:11:53.621   06:22:10	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:11:53.621   06:22:10	-- nvmf/common.sh@119 -- # set +e
00:11:53.621   06:22:10	-- nvmf/common.sh@120 -- # for i in {1..20}
00:11:53.621   06:22:10	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:11:53.621  rmmod nvme_tcp
00:11:53.621  rmmod nvme_fabrics
00:11:53.621  rmmod nvme_keyring
00:11:53.621   06:22:10	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:11:53.621   06:22:10	-- nvmf/common.sh@123 -- # set -e
00:11:53.621   06:22:10	-- nvmf/common.sh@124 -- # return 0
00:11:53.621   06:22:10	-- nvmf/common.sh@477 -- # '[' -n 66993 ']'
00:11:53.621   06:22:10	-- nvmf/common.sh@478 -- # killprocess 66993
00:11:53.621   06:22:10	-- common/autotest_common.sh@936 -- # '[' -z 66993 ']'
00:11:53.621   06:22:10	-- common/autotest_common.sh@940 -- # kill -0 66993
00:11:53.621    06:22:10	-- common/autotest_common.sh@941 -- # uname
00:11:53.621   06:22:10	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:53.621    06:22:10	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66993
00:11:53.621   06:22:10	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:11:53.621  killing process with pid 66993
00:11:53.621   06:22:10	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:11:53.621   06:22:10	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 66993'
00:11:53.621   06:22:10	-- common/autotest_common.sh@955 -- # kill 66993
00:11:53.621   06:22:10	-- common/autotest_common.sh@960 -- # wait 66993
00:11:53.880   06:22:10	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:11:53.880   06:22:10	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:11:53.880   06:22:10	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:11:53.880   06:22:10	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:53.880   06:22:10	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:11:53.880   06:22:10	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:53.880   06:22:10	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:53.880    06:22:10	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:53.880   06:22:10	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:11:53.880  
00:11:53.880  real	0m4.340s
00:11:53.880  user	0m12.614s
00:11:53.880  sys	0m0.953s
00:11:53.880   06:22:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:53.880   06:22:10	-- common/autotest_common.sh@10 -- # set +x
00:11:53.880  ************************************
00:11:53.880  END TEST nvmf_abort
00:11:53.880  ************************************
00:11:53.880   06:22:10	-- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:11:53.880   06:22:10	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:53.880   06:22:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:53.880   06:22:10	-- common/autotest_common.sh@10 -- # set +x
00:11:53.880  ************************************
00:11:53.880  START TEST nvmf_ns_hotplug_stress
00:11:53.880  ************************************
00:11:53.880   06:22:10	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:11:53.880  * Looking for test storage...
00:11:53.880  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:53.880    06:22:10	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:53.880     06:22:10	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:53.880     06:22:10	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:54.139    06:22:10	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:54.139    06:22:10	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:54.139    06:22:10	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:54.139    06:22:10	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:54.139    06:22:10	-- scripts/common.sh@335 -- # IFS=.-:
00:11:54.139    06:22:10	-- scripts/common.sh@335 -- # read -ra ver1
00:11:54.139    06:22:10	-- scripts/common.sh@336 -- # IFS=.-:
00:11:54.139    06:22:10	-- scripts/common.sh@336 -- # read -ra ver2
00:11:54.139    06:22:10	-- scripts/common.sh@337 -- # local 'op=<'
00:11:54.139    06:22:10	-- scripts/common.sh@339 -- # ver1_l=2
00:11:54.139    06:22:10	-- scripts/common.sh@340 -- # ver2_l=1
00:11:54.139    06:22:10	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:54.139    06:22:10	-- scripts/common.sh@343 -- # case "$op" in
00:11:54.139    06:22:10	-- scripts/common.sh@344 -- # : 1
00:11:54.139    06:22:10	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:54.139    06:22:10	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:54.139     06:22:10	-- scripts/common.sh@364 -- # decimal 1
00:11:54.139     06:22:10	-- scripts/common.sh@352 -- # local d=1
00:11:54.139     06:22:10	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:54.139     06:22:10	-- scripts/common.sh@354 -- # echo 1
00:11:54.139    06:22:10	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:54.139     06:22:10	-- scripts/common.sh@365 -- # decimal 2
00:11:54.139     06:22:10	-- scripts/common.sh@352 -- # local d=2
00:11:54.139     06:22:10	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:54.139     06:22:10	-- scripts/common.sh@354 -- # echo 2
00:11:54.139    06:22:10	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:54.139    06:22:10	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:54.139    06:22:10	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:54.139    06:22:10	-- scripts/common.sh@367 -- # return 0
00:11:54.139    06:22:10	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:54.139    06:22:10	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:54.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:54.139  		--rc genhtml_branch_coverage=1
00:11:54.139  		--rc genhtml_function_coverage=1
00:11:54.139  		--rc genhtml_legend=1
00:11:54.139  		--rc geninfo_all_blocks=1
00:11:54.139  		--rc geninfo_unexecuted_blocks=1
00:11:54.139  		
00:11:54.139  		'
00:11:54.139    06:22:10	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:54.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:54.139  		--rc genhtml_branch_coverage=1
00:11:54.139  		--rc genhtml_function_coverage=1
00:11:54.139  		--rc genhtml_legend=1
00:11:54.139  		--rc geninfo_all_blocks=1
00:11:54.139  		--rc geninfo_unexecuted_blocks=1
00:11:54.139  		
00:11:54.139  		'
00:11:54.139    06:22:10	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:54.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:54.139  		--rc genhtml_branch_coverage=1
00:11:54.139  		--rc genhtml_function_coverage=1
00:11:54.139  		--rc genhtml_legend=1
00:11:54.139  		--rc geninfo_all_blocks=1
00:11:54.139  		--rc geninfo_unexecuted_blocks=1
00:11:54.139  		
00:11:54.139  		'
00:11:54.139    06:22:10	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:54.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:54.139  		--rc genhtml_branch_coverage=1
00:11:54.139  		--rc genhtml_function_coverage=1
00:11:54.139  		--rc genhtml_legend=1
00:11:54.139  		--rc geninfo_all_blocks=1
00:11:54.139  		--rc geninfo_unexecuted_blocks=1
00:11:54.139  		
00:11:54.139  		'
00:11:54.139   06:22:10	-- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:54.139     06:22:10	-- nvmf/common.sh@7 -- # uname -s
00:11:54.139    06:22:10	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:54.139    06:22:10	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:54.139    06:22:10	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:54.139    06:22:10	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:54.139    06:22:10	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:54.139    06:22:10	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:54.139    06:22:10	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:54.139    06:22:10	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:54.139    06:22:10	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:54.139     06:22:10	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:54.139    06:22:10	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:11:54.139    06:22:10	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:11:54.139    06:22:10	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:54.139    06:22:10	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:54.139    06:22:10	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:54.139    06:22:10	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:54.139     06:22:10	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:54.139     06:22:10	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:54.139     06:22:10	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:54.139      06:22:10	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:54.140      06:22:10	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:54.140      06:22:10	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:54.140      06:22:10	-- paths/export.sh@5 -- # export PATH
00:11:54.140      06:22:10	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:54.140    06:22:10	-- nvmf/common.sh@46 -- # : 0
00:11:54.140    06:22:10	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:11:54.140    06:22:10	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:11:54.140    06:22:10	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:11:54.140    06:22:10	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:54.140    06:22:10	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:54.140    06:22:10	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:11:54.140    06:22:10	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:11:54.140    06:22:10	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:11:54.140   06:22:10	-- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:54.140   06:22:10	-- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:11:54.140   06:22:10	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:11:54.140   06:22:10	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:54.140   06:22:10	-- nvmf/common.sh@436 -- # prepare_net_devs
00:11:54.140   06:22:10	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:11:54.140   06:22:10	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:11:54.140   06:22:10	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:54.140   06:22:10	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:54.140    06:22:10	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:54.140   06:22:10	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:11:54.140   06:22:10	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:11:54.140   06:22:10	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:11:54.140   06:22:10	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:11:54.140   06:22:10	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:11:54.140   06:22:10	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:11:54.140   06:22:10	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:54.140   06:22:10	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:54.140   06:22:10	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:11:54.140   06:22:10	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:11:54.140   06:22:10	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:54.140   06:22:10	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:54.140   06:22:10	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:54.140   06:22:10	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:54.140   06:22:10	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:54.140   06:22:10	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:54.140   06:22:10	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:54.140   06:22:10	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:54.140   06:22:10	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:11:54.140   06:22:10	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:11:54.140  Cannot find device "nvmf_tgt_br"
00:11:54.140   06:22:10	-- nvmf/common.sh@154 -- # true
00:11:54.140   06:22:10	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:11:54.140  Cannot find device "nvmf_tgt_br2"
00:11:54.140   06:22:10	-- nvmf/common.sh@155 -- # true
00:11:54.140   06:22:10	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:11:54.140   06:22:10	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:11:54.140  Cannot find device "nvmf_tgt_br"
00:11:54.140   06:22:10	-- nvmf/common.sh@157 -- # true
00:11:54.140   06:22:10	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:11:54.140  Cannot find device "nvmf_tgt_br2"
00:11:54.140   06:22:11	-- nvmf/common.sh@158 -- # true
00:11:54.140   06:22:11	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:11:54.140   06:22:11	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:11:54.140   06:22:11	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:54.140  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:54.140   06:22:11	-- nvmf/common.sh@161 -- # true
00:11:54.140   06:22:11	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:54.140  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:54.140   06:22:11	-- nvmf/common.sh@162 -- # true
00:11:54.140   06:22:11	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:11:54.140   06:22:11	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:54.140   06:22:11	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:54.140   06:22:11	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:54.140   06:22:11	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:54.140   06:22:11	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:54.399   06:22:11	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:54.399   06:22:11	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:11:54.399   06:22:11	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:11:54.399   06:22:11	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:11:54.399   06:22:11	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:11:54.399   06:22:11	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:11:54.399   06:22:11	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:11:54.399   06:22:11	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:54.399   06:22:11	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:54.399   06:22:11	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:54.399   06:22:11	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:11:54.399   06:22:11	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:11:54.399   06:22:11	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:11:54.399   06:22:11	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:54.399   06:22:11	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:54.399   06:22:11	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:54.399   06:22:11	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:54.399   06:22:11	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:11:54.399  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:54.399  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms
00:11:54.399  
00:11:54.399  --- 10.0.0.2 ping statistics ---
00:11:54.399  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:54.399  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:11:54.399   06:22:11	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:11:54.399  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:54.399  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms
00:11:54.399  
00:11:54.399  --- 10.0.0.3 ping statistics ---
00:11:54.399  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:54.399  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:11:54.399   06:22:11	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:54.399  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:54.399  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms
00:11:54.399  
00:11:54.399  --- 10.0.0.1 ping statistics ---
00:11:54.399  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:54.399  rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
00:11:54.399   06:22:11	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:54.399   06:22:11	-- nvmf/common.sh@421 -- # return 0
00:11:54.399   06:22:11	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:11:54.399   06:22:11	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:54.399   06:22:11	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:11:54.399   06:22:11	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:11:54.399   06:22:11	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:54.399   06:22:11	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:11:54.399   06:22:11	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
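Note: the nvmf_veth_init phase above is easier to follow condensed. The sketch below is a paraphrase reconstructed only from the trace lines above (interface names, addresses, and iptables rules are exactly the ones the trace shows; ordering is simplified and the initial teardown/"Cannot find device" retries are omitted), not the verbatim nvmf/common.sh source.

# Condensed reconstruction of the test topology nvmf_veth_init builds above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The three pings that follow in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from
# inside the namespace) confirm initiator and target ends can reach each other.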
00:11:54.399   06:22:11	-- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:11:54.399   06:22:11	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:11:54.399   06:22:11	-- common/autotest_common.sh@722 -- # xtrace_disable
00:11:54.399   06:22:11	-- common/autotest_common.sh@10 -- # set +x
00:11:54.399   06:22:11	-- nvmf/common.sh@469 -- # nvmfpid=67263
00:11:54.399   06:22:11	-- nvmf/common.sh@470 -- # waitforlisten 67263
00:11:54.399   06:22:11	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:11:54.399   06:22:11	-- common/autotest_common.sh@829 -- # '[' -z 67263 ']'
00:11:54.399   06:22:11	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:54.399   06:22:11	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:54.399  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:54.400   06:22:11	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:54.400   06:22:11	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:54.400   06:22:11	-- common/autotest_common.sh@10 -- # set +x
00:11:54.400  [2024-12-16 06:22:11.288362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:54.400  [2024-12-16 06:22:11.288449] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:54.659  [2024-12-16 06:22:11.418471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:54.659  [2024-12-16 06:22:11.507229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:54.659  [2024-12-16 06:22:11.507573] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:54.659  [2024-12-16 06:22:11.507621] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:54.659  [2024-12-16 06:22:11.507756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:54.659  [2024-12-16 06:22:11.507907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:54.659  [2024-12-16 06:22:11.508148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:54.659  [2024-12-16 06:22:11.508152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:55.594   06:22:12	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:55.594   06:22:12	-- common/autotest_common.sh@862 -- # return 0
00:11:55.594   06:22:12	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:11:55.594   06:22:12	-- common/autotest_common.sh@728 -- # xtrace_disable
00:11:55.594   06:22:12	-- common/autotest_common.sh@10 -- # set +x
00:11:55.594   06:22:12	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:55.594   06:22:12	-- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:11:55.594   06:22:12	-- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:55.852  [2024-12-16 06:22:12.590657] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:55.852   06:22:12	-- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:56.111   06:22:12	-- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:56.111  [2024-12-16 06:22:13.072730] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:56.369   06:22:13	-- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:56.626   06:22:13	-- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:11:56.884  Malloc0
00:11:56.884   06:22:13	-- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:57.141  Delay0
00:11:57.141   06:22:14	-- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:57.398   06:22:14	-- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:11:57.657  NULL1
00:11:57.657   06:22:14	-- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:11:57.915   06:22:14	-- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67401
00:11:57.915   06:22:14	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:11:57.915   06:22:14	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:57.915   06:22:14	-- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:11:59.288  Read completed with error (sct=0, sc=11)
00:11:59.288   06:22:15	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:59.288  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:59.288  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:59.288  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:59.288  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:59.288  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:59.288  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:59.288   06:22:16	-- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:11:59.288   06:22:16	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:11:59.547  true
00:11:59.547   06:22:16	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:11:59.547   06:22:16	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
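The repeating block above (kill -0, remove_ns, add_ns Delay0, null_size bump, bdev_null_resize) is one iteration of the main hotplug loop. The sketch below is a paraphrase of that loop reconstructed from the ns_hotplug_stress.sh@44-@53 trace markers, not the verbatim script; rpc_py is shorthand for the full scripts/rpc.py path seen in the trace.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as printed in the trace
null_size=1000
# Keep hot-plugging namespace 1 and growing NULL1 while spdk_nvme_perf (PERF_PID) runs.
while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"
done
wait "$PERF_PID"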
00:12:00.483   06:22:17	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:00.483   06:22:17	-- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:12:00.483   06:22:17	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:12:00.742  true
00:12:00.742   06:22:17	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:00.742   06:22:17	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:01.000   06:22:17	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:01.259   06:22:18	-- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:12:01.259   06:22:18	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:12:01.538  true
00:12:01.538   06:22:18	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:01.538   06:22:18	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:02.474   06:22:19	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:02.732   06:22:19	-- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:12:02.732   06:22:19	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:12:02.732  true
00:12:02.990   06:22:19	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:02.990   06:22:19	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:02.990   06:22:19	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:03.248   06:22:20	-- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:12:03.248   06:22:20	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:12:03.506  true
00:12:03.764   06:22:20	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:03.764   06:22:20	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:04.023   06:22:20	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:04.281   06:22:21	-- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:12:04.281   06:22:21	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:12:04.281  true
00:12:04.281   06:22:21	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:04.281   06:22:21	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:05.217   06:22:22	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:05.475   06:22:22	-- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:12:05.475   06:22:22	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:12:05.734  true
00:12:05.734   06:22:22	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:05.734   06:22:22	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:05.995   06:22:22	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:06.253   06:22:23	-- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:12:06.253   06:22:23	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:12:06.512  true
00:12:06.512   06:22:23	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:06.512   06:22:23	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:07.449   06:22:24	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:07.449  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:07.449   06:22:24	-- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:12:07.449   06:22:24	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:12:07.708  true
00:12:07.708   06:22:24	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:07.708   06:22:24	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:07.966   06:22:24	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:08.225   06:22:25	-- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:12:08.225   06:22:25	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:12:08.483  true
00:12:08.483   06:22:25	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:08.483   06:22:25	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:09.418   06:22:26	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:09.676   06:22:26	-- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:12:09.676   06:22:26	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:12:09.935  true
00:12:09.935   06:22:26	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:09.935   06:22:26	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:10.192   06:22:26	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:10.450   06:22:27	-- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:12:10.450   06:22:27	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:12:10.707  true
00:12:10.707   06:22:27	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:10.707   06:22:27	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:10.964   06:22:27	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:11.528   06:22:28	-- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:12:11.528   06:22:28	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:12:11.786  true
00:12:11.786   06:22:28	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:11.786   06:22:28	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:12.044   06:22:28	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:12.301   06:22:29	-- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:12:12.301   06:22:29	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:12:12.558  true
00:12:12.558   06:22:29	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:12.558   06:22:29	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:13.490   06:22:30	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:13.748   06:22:30	-- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:12:13.748   06:22:30	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:12:13.748  true
00:12:13.748   06:22:30	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:13.748   06:22:30	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:14.006   06:22:30	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:14.264   06:22:31	-- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:12:14.264   06:22:31	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:12:14.532  true
00:12:14.532   06:22:31	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:14.532   06:22:31	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:15.484   06:22:32	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:15.742   06:22:32	-- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:12:15.742   06:22:32	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:12:15.742  true
00:12:15.742   06:22:32	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:15.742   06:22:32	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:16.000   06:22:32	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:16.258   06:22:33	-- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:12:16.258   06:22:33	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:12:16.515  true
00:12:16.515   06:22:33	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:16.515   06:22:33	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:17.448   06:22:34	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:17.707   06:22:34	-- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:12:17.707   06:22:34	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:12:17.965  true
00:12:17.965   06:22:34	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:17.965   06:22:34	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:18.222   06:22:34	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:18.222   06:22:35	-- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:12:18.222   06:22:35	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:12:18.480  true
00:12:18.480   06:22:35	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:18.480   06:22:35	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:19.413   06:22:36	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:19.671   06:22:36	-- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:12:19.671   06:22:36	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:12:19.929  true
00:12:19.929   06:22:36	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:19.929   06:22:36	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:20.186   06:22:36	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:20.444   06:22:37	-- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:12:20.444   06:22:37	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:12:20.702  true
00:12:20.702   06:22:37	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:20.702   06:22:37	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:21.636   06:22:38	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:21.636   06:22:38	-- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:12:21.636   06:22:38	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:12:21.894  true
00:12:21.894   06:22:38	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:21.894   06:22:38	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:22.152   06:22:39	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:22.411   06:22:39	-- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:12:22.411   06:22:39	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:12:22.669  true
00:12:22.669   06:22:39	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:22.669   06:22:39	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:22.927   06:22:39	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:23.184   06:22:39	-- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:12:23.184   06:22:39	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:12:23.442  true
00:12:23.442   06:22:40	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:23.442   06:22:40	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:24.376  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:24.376   06:22:41	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:24.376  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:24.376  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:24.634  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:24.634  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:24.634  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:24.634  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:24.634   06:22:41	-- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:12:24.634   06:22:41	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:12:24.892  true
00:12:24.892   06:22:41	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:24.892   06:22:41	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:25.826   06:22:42	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:26.084   06:22:42	-- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:12:26.084   06:22:42	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:12:26.084  true
00:12:26.084   06:22:43	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:26.084   06:22:43	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:26.356   06:22:43	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:26.632   06:22:43	-- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:12:26.632   06:22:43	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:12:26.890  true
00:12:26.890   06:22:43	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:26.890   06:22:43	-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:27.825   06:22:44	-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:27.825   06:22:44	-- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:12:27.825   06:22:44	-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:12:28.083  Initializing NVMe Controllers
00:12:28.083  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:28.083  Controller IO queue size 128, less than required.
00:12:28.083  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:28.083  Controller IO queue size 128, less than required.
00:12:28.083  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:28.084  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:28.084  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:28.084  Initialization complete. Launching workers.
00:12:28.084  ========================================================
00:12:28.084                                                                                                               Latency(us)
00:12:28.084  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:12:28.084  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:     555.14       0.27  117433.88    2402.36 1025728.41
00:12:28.084  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:   12272.80       5.99   10429.64    2899.59  590252.43
00:12:28.084  ========================================================
00:12:28.084  Total                                                                    :   12827.94       6.26   15060.32    2402.36 1025728.41
00:12:28.084  
00:12:28.084  true
00:12:28.084   06:22:44	-- target/ns_hotplug_stress.sh@44 -- # kill -0 67401
00:12:28.084  /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67401) - No such process
00:12:28.084   06:22:44	-- target/ns_hotplug_stress.sh@53 -- # wait 67401
00:12:28.084   06:22:44	-- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:28.342   06:22:45	-- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:28.600   06:22:45	-- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:28.600   06:22:45	-- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:28.600   06:22:45	-- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:28.600   06:22:45	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:28.600   06:22:45	-- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:28.858  null0
00:12:28.858   06:22:45	-- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:28.858   06:22:45	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:28.858   06:22:45	-- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:12:29.115  null1
00:12:29.115   06:22:45	-- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:29.115   06:22:45	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:29.115   06:22:45	-- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:12:29.373  null2
00:12:29.373   06:22:46	-- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:29.373   06:22:46	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:29.373   06:22:46	-- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:12:29.632  null3
00:12:29.632   06:22:46	-- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:29.632   06:22:46	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:29.632   06:22:46	-- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:12:29.890  null4
00:12:29.890   06:22:46	-- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:29.890   06:22:46	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:29.890   06:22:46	-- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:12:29.890  null5
00:12:30.148   06:22:46	-- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:30.148   06:22:46	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:30.148   06:22:46	-- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:12:30.148  null6
00:12:30.148   06:22:47	-- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:30.148   06:22:47	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:30.148   06:22:47	-- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:12:30.407  null7
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
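From here the trace interleaves eight background workers, so it is hard to read linearly. The sketch below reconstructs the second phase from the ns_hotplug_stress.sh@14-@18 and @62-@66 markers that follow: one add_remove worker per null bdev, each repeatedly adding and removing its own namespace ID against cnode1. It is a paraphrase of the trace, not the verbatim script; rpc_py is again shorthand for the scripts/rpc.py path shown in the log.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Each worker hot-plugs a single namespace ID ten times (trace markers @14-@18).
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

# Launch nthreads workers in parallel, nsid i+1 backed by bdev null<i> (markers @62-@64),
# then wait for all of them (marker @66 shows the literal PIDs).
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"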
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@66 -- # wait 68426 68427 68430 68431 68433 68435 68438 68439
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.407   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:30.665   06:22:47	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:30.665   06:22:47	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:30.665   06:22:47	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:30.665   06:22:47	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:30.665   06:22:47	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:30.924   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:31.183   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.183   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.183   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:31.183   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.183   06:22:47	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.183   06:22:47	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:31.183   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.183   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.183   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:31.183   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:31.183   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:31.183   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:31.183   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:31.183   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.442   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:31.701   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:31.959   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:32.218   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.218   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.218   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:32.218   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.218   06:22:48	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.218   06:22:48	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:32.218   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:32.476   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:32.476   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:32.476   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:32.476   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:32.476   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:32.476   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.476   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.476   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:32.735   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:32.994   06:22:49	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:33.253   06:22:49	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:33.253   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:33.511   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:33.770   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:34.028   06:22:50	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.287   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:34.545   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:34.804   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.063   06:22:51	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:35.063   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:35.321   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:35.321   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.321   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:35.322   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.580   06:22:52	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:35.839   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:36.098   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.098   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.098   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.098   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.098   06:22:52	-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:36.098   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.098   06:22:52	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.098   06:22:53	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.098   06:22:53	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.357   06:22:53	-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.357   06:22:53	-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
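The interleaved ns_hotplug_stress.sh@16–@18 lines above are the xtrace of the hotplug loop: several background workers each repeatedly attach a null bdev to nqn.2016-06.io.spdk:cnode1 as a namespace and then detach it again, ten times per worker. A minimal sketch of that loop, assuming the rpc.py path and NQN shown in the trace (worker structure and variable names are reconstructed, not copied from the script):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for nsid in {1..8}; do
        (
            i=0
            while (( i < 10 )); do
                # attach null bdev "null$((nsid-1))" as namespace $nsid, then remove it again
                "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"
                "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
                (( ++i ))
            done
        ) &
    done
    wait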
00:12:36.357   06:22:53	-- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:12:36.357   06:22:53	-- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:12:36.357   06:22:53	-- nvmf/common.sh@476 -- # nvmfcleanup
00:12:36.357   06:22:53	-- nvmf/common.sh@116 -- # sync
00:12:36.357   06:22:53	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:12:36.357   06:22:53	-- nvmf/common.sh@119 -- # set +e
00:12:36.357   06:22:53	-- nvmf/common.sh@120 -- # for i in {1..20}
00:12:36.357   06:22:53	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:12:36.357  rmmod nvme_tcp
00:12:36.357  rmmod nvme_fabrics
00:12:36.357  rmmod nvme_keyring
00:12:36.357   06:22:53	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:12:36.357   06:22:53	-- nvmf/common.sh@123 -- # set -e
00:12:36.357   06:22:53	-- nvmf/common.sh@124 -- # return 0
00:12:36.357   06:22:53	-- nvmf/common.sh@477 -- # '[' -n 67263 ']'
00:12:36.357   06:22:53	-- nvmf/common.sh@478 -- # killprocess 67263
00:12:36.357   06:22:53	-- common/autotest_common.sh@936 -- # '[' -z 67263 ']'
00:12:36.357   06:22:53	-- common/autotest_common.sh@940 -- # kill -0 67263
00:12:36.357    06:22:53	-- common/autotest_common.sh@941 -- # uname
00:12:36.357   06:22:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:36.357    06:22:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67263
00:12:36.357   06:22:53	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:12:36.357   06:22:53	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:12:36.357  killing process with pid 67263
00:12:36.357   06:22:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 67263'
00:12:36.357   06:22:53	-- common/autotest_common.sh@955 -- # kill 67263
00:12:36.357   06:22:53	-- common/autotest_common.sh@960 -- # wait 67263
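The nvmftestfini/killprocess trace above boils down to syncing, unloading the kernel NVMe-over-TCP modules, and killing the target process. A hedged sketch of the same steps (the function name is a placeholder and the pid is hard-coded from the trace only for illustration):

    nvmfpid=67263
    nvmf_cleanup_sketch() {
        sync
        modprobe -v -r nvme-tcp        # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
        modprobe -v -r nvme-fabrics
        if [[ -n "$nvmfpid" ]] && kill -0 "$nvmfpid" 2> /dev/null; then
            echo "killing process with pid $nvmfpid"
            kill "$nvmfpid"
            wait "$nvmfpid" || true    # wait only reaps children of this shell; best-effort here
        fi
    }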
00:12:36.616   06:22:53	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:12:36.616   06:22:53	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:12:36.616   06:22:53	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:12:36.616   06:22:53	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:36.616   06:22:53	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:12:36.616   06:22:53	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:36.616   06:22:53	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:36.616    06:22:53	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:36.876   06:22:53	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:12:36.876  
00:12:36.876  real	0m42.857s
00:12:36.876  user	3m27.022s
00:12:36.876  sys	0m12.200s
00:12:36.876   06:22:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:36.876  ************************************
00:12:36.876  END TEST nvmf_ns_hotplug_stress
00:12:36.876  ************************************
00:12:36.876   06:22:53	-- common/autotest_common.sh@10 -- # set +x
00:12:36.876   06:22:53	-- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:36.876   06:22:53	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:36.876   06:22:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:36.876   06:22:53	-- common/autotest_common.sh@10 -- # set +x
00:12:36.876  ************************************
00:12:36.876  START TEST nvmf_connect_stress
00:12:36.876  ************************************
00:12:36.876   06:22:53	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:36.876  * Looking for test storage...
00:12:36.876  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:12:36.876    06:22:53	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:12:36.876     06:22:53	-- common/autotest_common.sh@1690 -- # lcov --version
00:12:36.876     06:22:53	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:12:36.876    06:22:53	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:12:36.876    06:22:53	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:12:36.876    06:22:53	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:12:36.876    06:22:53	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:12:36.876    06:22:53	-- scripts/common.sh@335 -- # IFS=.-:
00:12:36.876    06:22:53	-- scripts/common.sh@335 -- # read -ra ver1
00:12:36.876    06:22:53	-- scripts/common.sh@336 -- # IFS=.-:
00:12:36.876    06:22:53	-- scripts/common.sh@336 -- # read -ra ver2
00:12:36.876    06:22:53	-- scripts/common.sh@337 -- # local 'op=<'
00:12:36.876    06:22:53	-- scripts/common.sh@339 -- # ver1_l=2
00:12:36.876    06:22:53	-- scripts/common.sh@340 -- # ver2_l=1
00:12:36.876    06:22:53	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:12:36.876    06:22:53	-- scripts/common.sh@343 -- # case "$op" in
00:12:36.876    06:22:53	-- scripts/common.sh@344 -- # : 1
00:12:36.876    06:22:53	-- scripts/common.sh@363 -- # (( v = 0 ))
00:12:36.876    06:22:53	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:36.876     06:22:53	-- scripts/common.sh@364 -- # decimal 1
00:12:36.876     06:22:53	-- scripts/common.sh@352 -- # local d=1
00:12:36.876     06:22:53	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:36.876     06:22:53	-- scripts/common.sh@354 -- # echo 1
00:12:36.876    06:22:53	-- scripts/common.sh@364 -- # ver1[v]=1
00:12:36.876     06:22:53	-- scripts/common.sh@365 -- # decimal 2
00:12:36.876     06:22:53	-- scripts/common.sh@352 -- # local d=2
00:12:36.876     06:22:53	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:36.876     06:22:53	-- scripts/common.sh@354 -- # echo 2
00:12:36.876    06:22:53	-- scripts/common.sh@365 -- # ver2[v]=2
00:12:36.876    06:22:53	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:12:36.876    06:22:53	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:12:36.876    06:22:53	-- scripts/common.sh@367 -- # return 0
00:12:36.876    06:22:53	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:36.876    06:22:53	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:12:36.876  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:36.876  		--rc genhtml_branch_coverage=1
00:12:36.876  		--rc genhtml_function_coverage=1
00:12:36.876  		--rc genhtml_legend=1
00:12:36.876  		--rc geninfo_all_blocks=1
00:12:36.876  		--rc geninfo_unexecuted_blocks=1
00:12:36.876  		
00:12:36.876  		'
00:12:36.876    06:22:53	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:12:36.876  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:36.876  		--rc genhtml_branch_coverage=1
00:12:36.876  		--rc genhtml_function_coverage=1
00:12:36.876  		--rc genhtml_legend=1
00:12:36.876  		--rc geninfo_all_blocks=1
00:12:36.876  		--rc geninfo_unexecuted_blocks=1
00:12:36.876  		
00:12:36.876  		'
00:12:36.876    06:22:53	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:12:36.876  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:36.876  		--rc genhtml_branch_coverage=1
00:12:36.876  		--rc genhtml_function_coverage=1
00:12:36.876  		--rc genhtml_legend=1
00:12:36.876  		--rc geninfo_all_blocks=1
00:12:36.876  		--rc geninfo_unexecuted_blocks=1
00:12:36.876  		
00:12:36.876  		'
00:12:36.876    06:22:53	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:12:36.876  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:36.876  		--rc genhtml_branch_coverage=1
00:12:36.876  		--rc genhtml_function_coverage=1
00:12:36.876  		--rc genhtml_legend=1
00:12:36.876  		--rc geninfo_all_blocks=1
00:12:36.876  		--rc geninfo_unexecuted_blocks=1
00:12:36.876  		
00:12:36.876  		'
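The scripts/common.sh lines above trace a field-by-field version comparison: the installed lcov version (1.15) is split on ".-:" and compared against 2, and because 1.15 < 2 the branch/function coverage flags are added to LCOV_OPTS. A hedged sketch of that comparison, with the digit-validation step (decimal) omitted:

    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # missing fields are treated as 0; first differing field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: pass the --rc branch/function coverage flags"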
00:12:36.876   06:22:53	-- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:12:36.876     06:22:53	-- nvmf/common.sh@7 -- # uname -s
00:12:36.876    06:22:53	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:36.876    06:22:53	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:36.876    06:22:53	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:36.876    06:22:53	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:36.876    06:22:53	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:36.876    06:22:53	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:36.876    06:22:53	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:36.876    06:22:53	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:36.876    06:22:53	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:36.876     06:22:53	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:36.876    06:22:53	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:12:36.876    06:22:53	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:12:36.876    06:22:53	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:36.876    06:22:53	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:36.876    06:22:53	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:12:36.876    06:22:53	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:36.876     06:22:53	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:36.876     06:22:53	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:36.876     06:22:53	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:36.876      06:22:53	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:36.876      06:22:53	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:36.877      06:22:53	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:36.877      06:22:53	-- paths/export.sh@5 -- # export PATH
00:12:36.877      06:22:53	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:36.877    06:22:53	-- nvmf/common.sh@46 -- # : 0
00:12:36.877    06:22:53	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:12:36.877    06:22:53	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:12:36.877    06:22:53	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:12:36.877    06:22:53	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:36.877    06:22:53	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:36.877    06:22:53	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:12:36.877    06:22:53	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:12:36.877    06:22:53	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:12:36.877   06:22:53	-- target/connect_stress.sh@12 -- # nvmftestinit
00:12:36.877   06:22:53	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:12:36.877   06:22:53	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:36.877   06:22:53	-- nvmf/common.sh@436 -- # prepare_net_devs
00:12:36.877   06:22:53	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:12:36.877   06:22:53	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:12:36.877   06:22:53	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:36.877   06:22:53	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:36.877    06:22:53	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:36.877   06:22:53	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:12:36.877   06:22:53	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:12:36.877   06:22:53	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:12:36.877   06:22:53	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:12:36.877   06:22:53	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:12:36.877   06:22:53	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:12:36.877   06:22:53	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:36.877   06:22:53	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:36.877   06:22:53	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:12:36.877   06:22:53	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:12:36.877   06:22:53	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:12:36.877   06:22:53	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:12:36.877   06:22:53	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:12:36.877   06:22:53	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:36.877   06:22:53	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:12:36.877   06:22:53	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:12:36.877   06:22:53	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:12:36.877   06:22:53	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:12:36.877   06:22:53	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:12:37.136   06:22:53	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:12:37.136  Cannot find device "nvmf_tgt_br"
00:12:37.136   06:22:53	-- nvmf/common.sh@154 -- # true
00:12:37.136   06:22:53	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:12:37.136  Cannot find device "nvmf_tgt_br2"
00:12:37.136   06:22:53	-- nvmf/common.sh@155 -- # true
00:12:37.136   06:22:53	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:12:37.136   06:22:53	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:12:37.136  Cannot find device "nvmf_tgt_br"
00:12:37.136   06:22:53	-- nvmf/common.sh@157 -- # true
00:12:37.136   06:22:53	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:12:37.136  Cannot find device "nvmf_tgt_br2"
00:12:37.136   06:22:53	-- nvmf/common.sh@158 -- # true
00:12:37.136   06:22:53	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:12:37.136   06:22:53	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:12:37.136   06:22:53	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:37.136  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:37.136   06:22:53	-- nvmf/common.sh@161 -- # true
00:12:37.136   06:22:53	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:37.136  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:37.136   06:22:53	-- nvmf/common.sh@162 -- # true
00:12:37.136   06:22:53	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:12:37.136   06:22:53	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:12:37.136   06:22:53	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:12:37.136   06:22:53	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:12:37.136   06:22:53	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:12:37.136   06:22:54	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:12:37.136   06:22:54	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:12:37.136   06:22:54	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:12:37.136   06:22:54	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:12:37.136   06:22:54	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:12:37.136   06:22:54	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:12:37.136   06:22:54	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:12:37.136   06:22:54	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:12:37.136   06:22:54	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:12:37.136   06:22:54	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:37.136   06:22:54	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:37.136   06:22:54	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:12:37.136   06:22:54	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:12:37.136   06:22:54	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:12:37.136   06:22:54	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:37.395   06:22:54	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:37.395   06:22:54	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:37.395   06:22:54	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:37.395   06:22:54	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:12:37.395  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:37.395  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms
00:12:37.395  
00:12:37.395  --- 10.0.0.2 ping statistics ---
00:12:37.395  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:37.395  rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:12:37.395   06:22:54	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:12:37.395  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:37.395  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms
00:12:37.395  
00:12:37.395  --- 10.0.0.3 ping statistics ---
00:12:37.395  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:37.395  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:12:37.395   06:22:54	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:37.395  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:37.395  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:12:37.395  
00:12:37.395  --- 10.0.0.1 ping statistics ---
00:12:37.395  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:37.395  rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:12:37.395   06:22:54	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:37.395   06:22:54	-- nvmf/common.sh@421 -- # return 0
00:12:37.395   06:22:54	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:12:37.395   06:22:54	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:37.395   06:22:54	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:12:37.395   06:22:54	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:12:37.395   06:22:54	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:37.395   06:22:54	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:12:37.395   06:22:54	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
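The nvmf_veth_init trace above builds a self-contained TCP test topology: a network namespace for the target, veth pairs whose host-side ends join a bridge, an iptables rule for port 4420, and ping checks in both directions. The commands below condense that setup to a single veth pair as a standalone sketch (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted; run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # host side -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host side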
00:12:37.395   06:22:54	-- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:12:37.395   06:22:54	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:12:37.395   06:22:54	-- common/autotest_common.sh@722 -- # xtrace_disable
00:12:37.395   06:22:54	-- common/autotest_common.sh@10 -- # set +x
00:12:37.395   06:22:54	-- nvmf/common.sh@469 -- # nvmfpid=69763
00:12:37.395   06:22:54	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:12:37.395   06:22:54	-- nvmf/common.sh@470 -- # waitforlisten 69763
00:12:37.395   06:22:54	-- common/autotest_common.sh@829 -- # '[' -z 69763 ']'
00:12:37.395   06:22:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:37.395   06:22:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:12:37.395  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:37.395   06:22:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:37.395   06:22:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:12:37.395   06:22:54	-- common/autotest_common.sh@10 -- # set +x
00:12:37.395  [2024-12-16 06:22:54.232522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:37.395  [2024-12-16 06:22:54.232613] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:37.653  [2024-12-16 06:22:54.371655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:37.653  [2024-12-16 06:22:54.531851] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:12:37.653  [2024-12-16 06:22:54.532114] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:37.653  [2024-12-16 06:22:54.532146] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:37.653  [2024-12-16 06:22:54.532168] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:37.653  [2024-12-16 06:22:54.532360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:12:37.653  [2024-12-16 06:22:54.533373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:12:37.653  [2024-12-16 06:22:54.533411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:38.588   06:22:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:38.588   06:22:55	-- common/autotest_common.sh@862 -- # return 0
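Above, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 69763) and waitforlisten polls until the RPC socket answers; the @858/@862 lines are that loop completing. A hedged sketch of the polling step, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the readiness probe (the real helper's probe and retry count may differ):

    nvmfpid=69763
    rpc_sock=/var/tmp/spdk.sock
    for (( retry = 0; retry < 100; retry++ )); do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening"; break; }
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            break    # RPC server is up; the target is ready for configuration
        fi
        sleep 0.1
    done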
00:12:38.588   06:22:55	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:12:38.588   06:22:55	-- common/autotest_common.sh@728 -- # xtrace_disable
00:12:38.588   06:22:55	-- common/autotest_common.sh@10 -- # set +x
00:12:38.588   06:22:55	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:38.588   06:22:55	-- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:38.588   06:22:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:38.588   06:22:55	-- common/autotest_common.sh@10 -- # set +x
00:12:38.588  [2024-12-16 06:22:55.298182] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:38.588   06:22:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:38.588   06:22:55	-- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:38.588   06:22:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:38.588   06:22:55	-- common/autotest_common.sh@10 -- # set +x
00:12:38.588   06:22:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:38.588   06:22:55	-- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:38.588   06:22:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:38.588   06:22:55	-- common/autotest_common.sh@10 -- # set +x
00:12:38.588  [2024-12-16 06:22:55.318322] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:38.588   06:22:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:38.588   06:22:55	-- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:38.588   06:22:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:38.588   06:22:55	-- common/autotest_common.sh@10 -- # set +x
00:12:38.588  NULL1
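The rpc_cmd calls traced above configure the target for the connect stress run in four steps: create the TCP transport, create the subsystem with a 10-namespace cap, add a listener on 10.0.0.2:4420, and back it with a 1000 MiB null bdev. The same configuration can be reproduced directly with rpc.py (paths and NQN as in the log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks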
00:12:38.588   06:22:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:38.588   06:22:55	-- target/connect_stress.sh@21 -- # PERF_PID=69821
00:12:38.589   06:22:55	-- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:12:38.589   06:22:55	-- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:12:38.589   06:22:55	-- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:12:38.589    06:22:55	-- target/connect_stress.sh@27 -- # seq 1 20
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
00:12:38.589   06:22:55	-- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:38.589   06:22:55	-- target/connect_stress.sh@28 -- # cat
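The connect_stress.sh@27/@28 pairs above are a for loop over seq 1 20 that appends a block of RPC commands to rpc.txt (the file is removed just before at @25). The payload of each cat is not visible in the trace, so the sketch below keeps only the loop shape with a clearly marked placeholder body:

    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        # placeholder payload: the real script appends a block of RPC commands here (not shown in the trace)
        printf '%s\n' "# iteration $i: RPC command(s) would be appended here" >> "$rpcs"
    done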
00:12:38.589   06:22:55	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:38.589   06:22:55	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:38.589   06:22:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:38.589   06:22:55	-- common/autotest_common.sh@10 -- # set +x
00:12:38.847   06:22:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:38.847   06:22:55	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:38.847   06:22:55	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:38.847   06:22:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:38.847   06:22:55	-- common/autotest_common.sh@10 -- # set +x
00:12:39.106   06:22:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.106   06:22:56	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:39.106   06:22:56	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:39.106   06:22:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.106   06:22:56	-- common/autotest_common.sh@10 -- # set +x
00:12:39.673   06:22:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.673   06:22:56	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:39.673   06:22:56	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:39.673   06:22:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.673   06:22:56	-- common/autotest_common.sh@10 -- # set +x
00:12:39.934   06:22:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.934   06:22:56	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:39.934   06:22:56	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:39.934   06:22:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.934   06:22:56	-- common/autotest_common.sh@10 -- # set +x
00:12:40.234   06:22:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.234   06:22:57	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:40.234   06:22:57	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:40.234   06:22:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.234   06:22:57	-- common/autotest_common.sh@10 -- # set +x
00:12:40.492   06:22:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.492   06:22:57	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:40.492   06:22:57	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:40.492   06:22:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.492   06:22:57	-- common/autotest_common.sh@10 -- # set +x
00:12:40.751   06:22:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.752   06:22:57	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:40.752   06:22:57	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:40.752   06:22:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.752   06:22:57	-- common/autotest_common.sh@10 -- # set +x
00:12:41.319   06:22:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.319   06:22:58	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:41.319   06:22:58	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:41.319   06:22:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.319   06:22:58	-- common/autotest_common.sh@10 -- # set +x
00:12:41.578   06:22:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.578   06:22:58	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:41.578   06:22:58	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:41.578   06:22:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.578   06:22:58	-- common/autotest_common.sh@10 -- # set +x
00:12:41.837   06:22:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.837   06:22:58	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:41.837   06:22:58	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:41.837   06:22:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.837   06:22:58	-- common/autotest_common.sh@10 -- # set +x
00:12:42.096   06:22:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:42.096   06:22:58	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:42.096   06:22:58	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:42.096   06:22:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:42.096   06:22:58	-- common/autotest_common.sh@10 -- # set +x
00:12:42.354   06:22:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:42.354   06:22:59	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:42.354   06:22:59	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:42.354   06:22:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:42.354   06:22:59	-- common/autotest_common.sh@10 -- # set +x
00:12:42.919   06:22:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:42.919   06:22:59	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:42.919   06:22:59	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:42.919   06:22:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:42.919   06:22:59	-- common/autotest_common.sh@10 -- # set +x
00:12:43.177   06:22:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.178   06:22:59	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:43.178   06:22:59	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:43.178   06:22:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.178   06:22:59	-- common/autotest_common.sh@10 -- # set +x
00:12:43.436   06:23:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.436   06:23:00	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:43.436   06:23:00	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:43.436   06:23:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.436   06:23:00	-- common/autotest_common.sh@10 -- # set +x
00:12:43.695   06:23:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.695   06:23:00	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:43.695   06:23:00	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:43.695   06:23:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.695   06:23:00	-- common/autotest_common.sh@10 -- # set +x
00:12:43.954   06:23:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.954   06:23:00	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:43.954   06:23:00	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:43.954   06:23:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.954   06:23:00	-- common/autotest_common.sh@10 -- # set +x
00:12:44.522   06:23:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:44.522   06:23:01	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:44.522   06:23:01	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:44.522   06:23:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:44.522   06:23:01	-- common/autotest_common.sh@10 -- # set +x
00:12:44.781   06:23:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:44.781   06:23:01	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:44.781   06:23:01	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:44.781   06:23:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:44.781   06:23:01	-- common/autotest_common.sh@10 -- # set +x
00:12:45.039   06:23:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.039   06:23:01	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:45.039   06:23:01	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:45.039   06:23:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.039   06:23:01	-- common/autotest_common.sh@10 -- # set +x
00:12:45.298   06:23:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.298   06:23:02	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:45.298   06:23:02	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:45.298   06:23:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.298   06:23:02	-- common/autotest_common.sh@10 -- # set +x
00:12:45.866   06:23:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.866   06:23:02	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:45.866   06:23:02	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:45.866   06:23:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.866   06:23:02	-- common/autotest_common.sh@10 -- # set +x
00:12:46.125   06:23:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.125   06:23:02	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:46.125   06:23:02	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:46.125   06:23:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.125   06:23:02	-- common/autotest_common.sh@10 -- # set +x
00:12:46.384   06:23:03	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.384   06:23:03	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:46.384   06:23:03	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:46.384   06:23:03	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.384   06:23:03	-- common/autotest_common.sh@10 -- # set +x
00:12:46.642   06:23:03	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.642   06:23:03	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:46.642   06:23:03	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:46.642   06:23:03	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.642   06:23:03	-- common/autotest_common.sh@10 -- # set +x
00:12:46.901   06:23:03	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.902   06:23:03	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:46.902   06:23:03	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:46.902   06:23:03	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.902   06:23:03	-- common/autotest_common.sh@10 -- # set +x
00:12:47.469   06:23:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.469   06:23:04	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:47.469   06:23:04	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:47.469   06:23:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.469   06:23:04	-- common/autotest_common.sh@10 -- # set +x
00:12:47.728   06:23:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.728   06:23:04	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:47.728   06:23:04	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:47.728   06:23:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.728   06:23:04	-- common/autotest_common.sh@10 -- # set +x
00:12:47.987   06:23:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.987   06:23:04	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:47.987   06:23:04	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:47.987   06:23:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.987   06:23:04	-- common/autotest_common.sh@10 -- # set +x
00:12:48.249   06:23:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.249   06:23:05	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:48.249   06:23:05	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:48.249   06:23:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.249   06:23:05	-- common/autotest_common.sh@10 -- # set +x
00:12:48.510   06:23:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.510   06:23:05	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:48.510   06:23:05	-- target/connect_stress.sh@35 -- # rpc_cmd
00:12:48.510   06:23:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.510   06:23:05	-- common/autotest_common.sh@10 -- # set +x
00:12:48.769  Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:49.027   06:23:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.027   06:23:05	-- target/connect_stress.sh@34 -- # kill -0 69821
00:12:49.027  /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69821) - No such process
00:12:49.027   06:23:05	-- target/connect_stress.sh@38 -- # wait 69821
00:12:49.027   06:23:05	-- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
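The repeated kill -0 69821 / rpc_cmd pairs above are connect_stress.sh's watchdog loop: while the backgrounded stress workload (PID 69821) is still alive the script keeps driving RPCs at the target, and once kill -0 reports "No such process" it falls through to wait and removes the RPC batch file. A minimal sketch of that loop, with the PID and file name taken from the trace; the trace does not show what rpc_cmd reads, so feeding it rpc.txt is an assumption, and the real rpc_cmd helper lives in autotest_common.sh:

    # hypothetical reduction of connect_stress.sh lines 34-39
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # stand-in for the framework helper
    perf_pid=69821                                                    # backgrounded stress workload (from the trace)
    rpc_txt=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt

    while kill -0 "$perf_pid" 2>/dev/null; do   # line 34: is the workload still running?
        rpc_cmd < "$rpc_txt"                    # line 35: keep the target busy (rpc.txt input is assumed)
    done
    wait "$perf_pid"                            # line 38: reap the workload
    rm -f "$rpc_txt"                            # line 39: drop the RPC batch file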
00:12:49.027   06:23:05	-- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:12:49.027   06:23:05	-- target/connect_stress.sh@43 -- # nvmftestfini
00:12:49.027   06:23:05	-- nvmf/common.sh@476 -- # nvmfcleanup
00:12:49.027   06:23:05	-- nvmf/common.sh@116 -- # sync
00:12:49.027   06:23:05	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:12:49.027   06:23:05	-- nvmf/common.sh@119 -- # set +e
00:12:49.027   06:23:05	-- nvmf/common.sh@120 -- # for i in {1..20}
00:12:49.027   06:23:05	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:12:49.027  rmmod nvme_tcp
00:12:49.027  rmmod nvme_fabrics
00:12:49.027  rmmod nvme_keyring
00:12:49.027   06:23:05	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:12:49.027   06:23:05	-- nvmf/common.sh@123 -- # set -e
00:12:49.027   06:23:05	-- nvmf/common.sh@124 -- # return 0
00:12:49.027   06:23:05	-- nvmf/common.sh@477 -- # '[' -n 69763 ']'
00:12:49.027   06:23:05	-- nvmf/common.sh@478 -- # killprocess 69763
00:12:49.027   06:23:05	-- common/autotest_common.sh@936 -- # '[' -z 69763 ']'
00:12:49.027   06:23:05	-- common/autotest_common.sh@940 -- # kill -0 69763
00:12:49.027    06:23:05	-- common/autotest_common.sh@941 -- # uname
00:12:49.027   06:23:05	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:49.027    06:23:05	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69763
00:12:49.027  killing process with pid 69763
00:12:49.027   06:23:05	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:12:49.027   06:23:05	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:12:49.027   06:23:05	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 69763'
00:12:49.027   06:23:05	-- common/autotest_common.sh@955 -- # kill 69763
00:12:49.027   06:23:05	-- common/autotest_common.sh@960 -- # wait 69763
00:12:49.286   06:23:06	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:12:49.286   06:23:06	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:12:49.286   06:23:06	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:12:49.286   06:23:06	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:49.286   06:23:06	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:12:49.286   06:23:06	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:49.286   06:23:06	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:49.286    06:23:06	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:49.545   06:23:06	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
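The block above is nvmftestfini tearing the stress test down: unloading nvme-tcp, nvme-fabrics and nvme-keyring, killing the nvmf_tgt process (PID 69763, whose comm is reactor_1), and flushing the initiator address. The helpers live in nvmf/common.sh and autotest_common.sh; a condensed, hedged reconstruction of the visible steps follows, with the retry structure and error handling simplified as assumptions:

    nvmfcleanup() {                                   # nvmf/common.sh@116-124
        sync
        for i in {1..20}; do                          # retry unloading the kernel modules
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        done
        return 0
    }
    killprocess() {                                   # autotest_common.sh@936-960 (simplified)
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap it; ignore its exit status
    }
    nvmfcleanup
    killprocess 69763                                 # nvmf_tgt from this run (comm=reactor_1)
    ip -4 addr flush nvmf_init_if                     # nvmf/common.sh@278: drop 10.0.0.1 from the host side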
00:12:49.545  
00:12:49.545  real	0m12.627s
00:12:49.545  user	0m41.933s
00:12:49.545  sys	0m3.001s
00:12:49.545   06:23:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:49.545  ************************************
00:12:49.545  END TEST nvmf_connect_stress
00:12:49.545  ************************************
00:12:49.545   06:23:06	-- common/autotest_common.sh@10 -- # set +x
00:12:49.545   06:23:06	-- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:49.545   06:23:06	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:49.545   06:23:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:49.545   06:23:06	-- common/autotest_common.sh@10 -- # set +x
00:12:49.545  ************************************
00:12:49.545  START TEST nvmf_fused_ordering
00:12:49.545  ************************************
00:12:49.545   06:23:06	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:49.545  * Looking for test storage...
00:12:49.545  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:12:49.545    06:23:06	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:12:49.545     06:23:06	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:12:49.545     06:23:06	-- common/autotest_common.sh@1690 -- # lcov --version
00:12:49.545    06:23:06	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:12:49.545    06:23:06	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:12:49.545    06:23:06	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:12:49.545    06:23:06	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:12:49.545    06:23:06	-- scripts/common.sh@335 -- # IFS=.-:
00:12:49.545    06:23:06	-- scripts/common.sh@335 -- # read -ra ver1
00:12:49.545    06:23:06	-- scripts/common.sh@336 -- # IFS=.-:
00:12:49.545    06:23:06	-- scripts/common.sh@336 -- # read -ra ver2
00:12:49.545    06:23:06	-- scripts/common.sh@337 -- # local 'op=<'
00:12:49.545    06:23:06	-- scripts/common.sh@339 -- # ver1_l=2
00:12:49.545    06:23:06	-- scripts/common.sh@340 -- # ver2_l=1
00:12:49.545    06:23:06	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:12:49.545    06:23:06	-- scripts/common.sh@343 -- # case "$op" in
00:12:49.545    06:23:06	-- scripts/common.sh@344 -- # : 1
00:12:49.545    06:23:06	-- scripts/common.sh@363 -- # (( v = 0 ))
00:12:49.545    06:23:06	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:49.545     06:23:06	-- scripts/common.sh@364 -- # decimal 1
00:12:49.545     06:23:06	-- scripts/common.sh@352 -- # local d=1
00:12:49.545     06:23:06	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:49.545     06:23:06	-- scripts/common.sh@354 -- # echo 1
00:12:49.545    06:23:06	-- scripts/common.sh@364 -- # ver1[v]=1
00:12:49.545     06:23:06	-- scripts/common.sh@365 -- # decimal 2
00:12:49.545     06:23:06	-- scripts/common.sh@352 -- # local d=2
00:12:49.545     06:23:06	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:49.545     06:23:06	-- scripts/common.sh@354 -- # echo 2
00:12:49.545    06:23:06	-- scripts/common.sh@365 -- # ver2[v]=2
00:12:49.545    06:23:06	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:12:49.545    06:23:06	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:12:49.545    06:23:06	-- scripts/common.sh@367 -- # return 0
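The trace above walks scripts/common.sh's cmp_versions to decide whether the installed lcov predates version 2 (lt 1.15 2): both strings are split on ".", "-" and ":" and compared field by field as integers, and the walk ends with return 0, i.e. 1.15 < 2 holds. A standalone sketch of the same comparison; the real helper also validates each field through a decimal() check, which is omitted here:

    cmp_versions() {                                  # sketch of scripts/common.sh cmp_versions
        local ver1 ver2 op=$2
        IFS=.-: read -ra ver1 <<< "$1"                # 1.15 -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"                # 2    -> (2)
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}     # missing fields count as 0
            ((a > b)) && { [[ $op == ">" ]]; return; }
            ((a < b)) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]
    }
    cmp_versions 1.15 "<" 2 && echo "lcov predates 2"   # matches the trace: the legacy --rc lcov_* options are used below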
00:12:49.545    06:23:06	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:49.545    06:23:06	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:12:49.545  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:49.545  		--rc genhtml_branch_coverage=1
00:12:49.545  		--rc genhtml_function_coverage=1
00:12:49.545  		--rc genhtml_legend=1
00:12:49.545  		--rc geninfo_all_blocks=1
00:12:49.545  		--rc geninfo_unexecuted_blocks=1
00:12:49.545  		
00:12:49.545  		'
00:12:49.545    06:23:06	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:12:49.545  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:49.545  		--rc genhtml_branch_coverage=1
00:12:49.545  		--rc genhtml_function_coverage=1
00:12:49.545  		--rc genhtml_legend=1
00:12:49.545  		--rc geninfo_all_blocks=1
00:12:49.545  		--rc geninfo_unexecuted_blocks=1
00:12:49.545  		
00:12:49.545  		'
00:12:49.545    06:23:06	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:12:49.545  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:49.545  		--rc genhtml_branch_coverage=1
00:12:49.545  		--rc genhtml_function_coverage=1
00:12:49.545  		--rc genhtml_legend=1
00:12:49.545  		--rc geninfo_all_blocks=1
00:12:49.545  		--rc geninfo_unexecuted_blocks=1
00:12:49.545  		
00:12:49.545  		'
00:12:49.545    06:23:06	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:12:49.545  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:49.545  		--rc genhtml_branch_coverage=1
00:12:49.545  		--rc genhtml_function_coverage=1
00:12:49.545  		--rc genhtml_legend=1
00:12:49.545  		--rc geninfo_all_blocks=1
00:12:49.545  		--rc geninfo_unexecuted_blocks=1
00:12:49.545  		
00:12:49.546  		'
00:12:49.546   06:23:06	-- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:12:49.546     06:23:06	-- nvmf/common.sh@7 -- # uname -s
00:12:49.805    06:23:06	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:49.805    06:23:06	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:49.805    06:23:06	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:49.805    06:23:06	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:49.805    06:23:06	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:49.805    06:23:06	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:49.805    06:23:06	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:49.805    06:23:06	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:49.805    06:23:06	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:49.805     06:23:06	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:49.805    06:23:06	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:12:49.805    06:23:06	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:12:49.805    06:23:06	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:49.805    06:23:06	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:49.805    06:23:06	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:12:49.805    06:23:06	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:49.805     06:23:06	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:49.805     06:23:06	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:49.805     06:23:06	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:49.805      06:23:06	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:49.805      06:23:06	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:49.805      06:23:06	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:49.805      06:23:06	-- paths/export.sh@5 -- # export PATH
00:12:49.805      06:23:06	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:49.805    06:23:06	-- nvmf/common.sh@46 -- # : 0
00:12:49.805    06:23:06	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:12:49.805    06:23:06	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:12:49.805    06:23:06	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:12:49.805    06:23:06	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:49.805    06:23:06	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:49.805    06:23:06	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:12:49.805    06:23:06	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:12:49.805    06:23:06	-- nvmf/common.sh@50 -- # have_pci_nics=0
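nvmf/common.sh has just populated the host identity used by initiator-side commands later in the suite: nvme gen-hostnqn produced nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e, and NVME_HOSTID carries the same UUID. How the UUID is extracted is not visible in the trace; one plausible, hedged rendering, plus an illustrative example of how the NVME_HOST array is consumed by nvme-cli:

    NVME_HOSTNQN=$(nvme gen-hostnqn)                  # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}                   # keep the trailing UUID (assumed derivation)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # typical use later in the suite (illustrative, not from this trace):
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"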
00:12:49.805   06:23:06	-- target/fused_ordering.sh@12 -- # nvmftestinit
00:12:49.805   06:23:06	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:12:49.805   06:23:06	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:49.805   06:23:06	-- nvmf/common.sh@436 -- # prepare_net_devs
00:12:49.805   06:23:06	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:12:49.805   06:23:06	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:12:49.805   06:23:06	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:49.805   06:23:06	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:49.805    06:23:06	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:49.805   06:23:06	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:12:49.805   06:23:06	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:12:49.805   06:23:06	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:12:49.805   06:23:06	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:12:49.805   06:23:06	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:12:49.805   06:23:06	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:12:49.805   06:23:06	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:49.805   06:23:06	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:49.805   06:23:06	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:12:49.805   06:23:06	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:12:49.805   06:23:06	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:12:49.805   06:23:06	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:12:49.805   06:23:06	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:12:49.805   06:23:06	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:49.805   06:23:06	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:12:49.805   06:23:06	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:12:49.805   06:23:06	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:12:49.805   06:23:06	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:12:49.805   06:23:06	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:12:49.805   06:23:06	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:12:49.805  Cannot find device "nvmf_tgt_br"
00:12:49.805   06:23:06	-- nvmf/common.sh@154 -- # true
00:12:49.805   06:23:06	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:12:49.805  Cannot find device "nvmf_tgt_br2"
00:12:49.805   06:23:06	-- nvmf/common.sh@155 -- # true
00:12:49.805   06:23:06	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:12:49.805   06:23:06	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:12:49.805  Cannot find device "nvmf_tgt_br"
00:12:49.805   06:23:06	-- nvmf/common.sh@157 -- # true
00:12:49.805   06:23:06	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:12:49.805  Cannot find device "nvmf_tgt_br2"
00:12:49.805   06:23:06	-- nvmf/common.sh@158 -- # true
00:12:49.805   06:23:06	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:12:49.805   06:23:06	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:12:49.805   06:23:06	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:49.805  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:49.805   06:23:06	-- nvmf/common.sh@161 -- # true
00:12:49.805   06:23:06	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:49.805  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:49.805   06:23:06	-- nvmf/common.sh@162 -- # true
00:12:49.805   06:23:06	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:12:49.805   06:23:06	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:12:49.805   06:23:06	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:12:49.805   06:23:06	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:12:49.805   06:23:06	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:12:49.805   06:23:06	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:12:49.805   06:23:06	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:12:49.805   06:23:06	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:12:50.064   06:23:06	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:12:50.064   06:23:06	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:12:50.064   06:23:06	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:12:50.064   06:23:06	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:12:50.064   06:23:06	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:12:50.064   06:23:06	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:12:50.064   06:23:06	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:50.064   06:23:06	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:50.064   06:23:06	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:12:50.064   06:23:06	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:12:50.064   06:23:06	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:12:50.064   06:23:06	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:50.064   06:23:06	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:50.064   06:23:06	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:50.064   06:23:06	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:50.064   06:23:06	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:12:50.064  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:50.064  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms
00:12:50.064  
00:12:50.064  --- 10.0.0.2 ping statistics ---
00:12:50.064  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:50.064  rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms
00:12:50.064   06:23:06	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:12:50.064  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:50.064  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms
00:12:50.064  
00:12:50.064  --- 10.0.0.3 ping statistics ---
00:12:50.064  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:50.064  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:12:50.064   06:23:06	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:50.064  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:50.064  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:12:50.064  
00:12:50.064  --- 10.0.0.1 ping statistics ---
00:12:50.064  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:50.064  rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:12:50.064   06:23:06	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:50.064   06:23:06	-- nvmf/common.sh@421 -- # return 0
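The nvmf_veth_init block above builds the virtual test network (the initial "Cannot find device" / "Cannot open network namespace" messages are just the best-effort cleanup of a previous run failing harmlessly): the host keeps nvmf_init_if at 10.0.0.1, the SPDK target lives in the nvmf_tgt_ns_spdk namespace behind nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), and the host-side veth peers are joined by the nvmf_br bridge. The same topology condensed into one listing; the commands are copied from the trace, while the grouping and the small bring-up loop are editorial:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in on the host side
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # forward traffic within the bridge
    ping -c 1 10.0.0.2                                                  # host -> target namespace sanity check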
00:12:50.064   06:23:06	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:12:50.064   06:23:06	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:50.064   06:23:06	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:12:50.064   06:23:06	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:12:50.064   06:23:06	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:50.064   06:23:06	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:12:50.064   06:23:06	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:12:50.064   06:23:06	-- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:12:50.064   06:23:06	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:12:50.064   06:23:06	-- common/autotest_common.sh@722 -- # xtrace_disable
00:12:50.064   06:23:06	-- common/autotest_common.sh@10 -- # set +x
00:12:50.064   06:23:06	-- nvmf/common.sh@469 -- # nvmfpid=70157
00:12:50.064   06:23:06	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:50.064   06:23:06	-- nvmf/common.sh@470 -- # waitforlisten 70157
00:12:50.064   06:23:06	-- common/autotest_common.sh@829 -- # '[' -z 70157 ']'
00:12:50.064  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:50.064   06:23:06	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:50.064   06:23:06	-- common/autotest_common.sh@834 -- # local max_retries=100
00:12:50.064   06:23:06	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:50.064   06:23:06	-- common/autotest_common.sh@838 -- # xtrace_disable
00:12:50.064   06:23:06	-- common/autotest_common.sh@10 -- # set +x
00:12:50.064  [2024-12-16 06:23:06.992392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:50.064  [2024-12-16 06:23:06.992682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:50.323  [2024-12-16 06:23:07.128280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:50.323  [2024-12-16 06:23:07.253277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:12:50.323  [2024-12-16 06:23:07.253423] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:50.323  [2024-12-16 06:23:07.253436] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:50.323  [2024-12-16 06:23:07.253444] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:50.323  [2024-12-16 06:23:07.253478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:51.258   06:23:07	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:51.258   06:23:07	-- common/autotest_common.sh@862 -- # return 0
00:12:51.258   06:23:07	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:12:51.258   06:23:07	-- common/autotest_common.sh@728 -- # xtrace_disable
00:12:51.258   06:23:07	-- common/autotest_common.sh@10 -- # set +x
00:12:51.258   06:23:08	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
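nvmfappstart has just launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, PID 70157, a single reactor on core 1) and waitforlisten blocked until the app answered on /var/tmp/spdk.sock. A hedged sketch of that start-and-wait step; the polling loop below stands in for waitforlisten, whose real implementation in autotest_common.sh is not shown in the trace:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                        # 70157 in this run

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in {1..100}; do                             # assumed stand-in for waitforlisten
        "$rpc" -t 1 rpc_get_methods &>/dev/null && break   # succeeds once /var/tmp/spdk.sock answers
        sleep 0.1
    done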
00:12:51.258   06:23:08	-- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:51.258   06:23:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.258   06:23:08	-- common/autotest_common.sh@10 -- # set +x
00:12:51.258  [2024-12-16 06:23:08.043627] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:51.258   06:23:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.258   06:23:08	-- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:51.258   06:23:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.258   06:23:08	-- common/autotest_common.sh@10 -- # set +x
00:12:51.258   06:23:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.258   06:23:08	-- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:51.258   06:23:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.258   06:23:08	-- common/autotest_common.sh@10 -- # set +x
00:12:51.258  [2024-12-16 06:23:08.059786] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:51.258   06:23:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.258   06:23:08	-- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:51.258   06:23:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.259   06:23:08	-- common/autotest_common.sh@10 -- # set +x
00:12:51.259  NULL1
00:12:51.259   06:23:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.259   06:23:08	-- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:12:51.259   06:23:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.259   06:23:08	-- common/autotest_common.sh@10 -- # set +x
00:12:51.259   06:23:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.259   06:23:08	-- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:12:51.259   06:23:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.259   06:23:08	-- common/autotest_common.sh@10 -- # set +x
00:12:51.259   06:23:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
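fused_ordering.sh has now configured the target through six RPCs: create the TCP transport, create subsystem cnode1, add a TCP listener on 10.0.0.2:4420, back it with a null bdev, wait for bdev examination, and attach the bdev as a namespace. Written out as direct scripts/rpc.py calls (rpc_cmd is the test framework's wrapper around rpc.py, so the method names and flags below are verbatim from the trace; only the comments are editorial):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, 8 KiB IO unit size
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
           -a -s SPDK00000000000001 -m 10                                 # allow any host, fixed serial, max 10 namespaces
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4420                                     # listen inside the target namespace
    "$rpc" bdev_null_create NULL1 1000 512                                # 1000 MiB null bdev, 512-byte blocks
    "$rpc" bdev_wait_for_examine
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1         # shows up as "Namespace ID: 1" below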
00:12:51.259   06:23:08	-- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:51.259  [2024-12-16 06:23:08.109516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:51.259  [2024-12-16 06:23:08.109546] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70207 ]
00:12:51.826  Attached to nqn.2016-06.io.spdk:cnode1
00:12:51.826    Namespace ID: 1 size: 1GB
00:12:51.827  fused_ordering(0)
00:12:51.827  fused_ordering(1)
00:12:51.827  fused_ordering(2)
00:12:51.827  fused_ordering(3)
00:12:51.827  fused_ordering(4)
00:12:51.827  fused_ordering(5)
00:12:51.827  fused_ordering(6)
00:12:51.827  fused_ordering(7)
00:12:51.827  fused_ordering(8)
00:12:51.827  fused_ordering(9)
00:12:51.827  fused_ordering(10)
00:12:51.827  fused_ordering(11)
00:12:51.827  fused_ordering(12)
00:12:51.827  fused_ordering(13)
00:12:51.827  fused_ordering(14)
00:12:51.827  fused_ordering(15)
00:12:51.827  fused_ordering(16)
00:12:51.827  fused_ordering(17)
00:12:51.827  fused_ordering(18)
00:12:51.827  fused_ordering(19)
00:12:51.827  fused_ordering(20)
00:12:51.827  fused_ordering(21)
00:12:51.827  fused_ordering(22)
00:12:51.827  fused_ordering(23)
00:12:51.827  fused_ordering(24)
00:12:51.827  fused_ordering(25)
00:12:51.827  fused_ordering(26)
00:12:51.827  fused_ordering(27)
00:12:51.827  fused_ordering(28)
00:12:51.827  fused_ordering(29)
00:12:51.827  fused_ordering(30)
00:12:51.827  fused_ordering(31)
00:12:51.827  fused_ordering(32)
00:12:51.827  fused_ordering(33)
00:12:51.827  fused_ordering(34)
00:12:51.827  fused_ordering(35)
00:12:51.827  fused_ordering(36)
00:12:51.827  fused_ordering(37)
00:12:51.827  fused_ordering(38)
00:12:51.827  fused_ordering(39)
00:12:51.827  fused_ordering(40)
00:12:51.827  fused_ordering(41)
00:12:51.827  fused_ordering(42)
00:12:51.827  fused_ordering(43)
00:12:51.827  fused_ordering(44)
00:12:51.827  fused_ordering(45)
00:12:51.827  fused_ordering(46)
00:12:51.827  fused_ordering(47)
00:12:51.827  fused_ordering(48)
00:12:51.827  fused_ordering(49)
00:12:51.827  fused_ordering(50)
00:12:51.827  fused_ordering(51)
00:12:51.827  fused_ordering(52)
00:12:51.827  fused_ordering(53)
00:12:51.827  fused_ordering(54)
00:12:51.827  fused_ordering(55)
00:12:51.827  fused_ordering(56)
00:12:51.827  fused_ordering(57)
00:12:51.827  fused_ordering(58)
00:12:51.827  fused_ordering(59)
00:12:51.827  fused_ordering(60)
00:12:51.827  fused_ordering(61)
00:12:51.827  fused_ordering(62)
00:12:51.827  fused_ordering(63)
00:12:51.827  fused_ordering(64)
00:12:51.827  fused_ordering(65)
00:12:51.827  fused_ordering(66)
00:12:51.827  fused_ordering(67)
00:12:51.827  fused_ordering(68)
00:12:51.827  fused_ordering(69)
00:12:51.827  fused_ordering(70)
00:12:51.827  fused_ordering(71)
00:12:51.827  fused_ordering(72)
00:12:51.827  fused_ordering(73)
00:12:51.827  fused_ordering(74)
00:12:51.827  fused_ordering(75)
00:12:51.827  fused_ordering(76)
00:12:51.827  fused_ordering(77)
00:12:51.827  fused_ordering(78)
00:12:51.827  fused_ordering(79)
00:12:51.827  fused_ordering(80)
00:12:51.827  fused_ordering(81)
00:12:51.827  fused_ordering(82)
00:12:51.827  fused_ordering(83)
00:12:51.827  fused_ordering(84)
00:12:51.827  fused_ordering(85)
00:12:51.827  fused_ordering(86)
00:12:51.827  fused_ordering(87)
00:12:51.827  fused_ordering(88)
00:12:51.827  fused_ordering(89)
00:12:51.827  fused_ordering(90)
00:12:51.827  fused_ordering(91)
00:12:51.827  fused_ordering(92)
00:12:51.827  fused_ordering(93)
00:12:51.827  fused_ordering(94)
00:12:51.827  fused_ordering(95)
00:12:51.827  fused_ordering(96)
00:12:51.827  fused_ordering(97)
00:12:51.827  fused_ordering(98)
00:12:51.827  fused_ordering(99)
00:12:51.827  fused_ordering(100)
00:12:51.827  fused_ordering(101)
00:12:51.827  fused_ordering(102)
00:12:51.827  fused_ordering(103)
00:12:51.827  fused_ordering(104)
00:12:51.827  fused_ordering(105)
00:12:51.827  fused_ordering(106)
00:12:51.827  fused_ordering(107)
00:12:51.827  fused_ordering(108)
00:12:51.827  fused_ordering(109)
00:12:51.827  fused_ordering(110)
00:12:51.827  fused_ordering(111)
00:12:51.827  fused_ordering(112)
00:12:51.827  fused_ordering(113)
00:12:51.827  fused_ordering(114)
00:12:51.827  fused_ordering(115)
00:12:51.827  fused_ordering(116)
00:12:51.827  fused_ordering(117)
00:12:51.827  fused_ordering(118)
00:12:51.827  fused_ordering(119)
00:12:51.827  fused_ordering(120)
00:12:51.827  fused_ordering(121)
00:12:51.827  fused_ordering(122)
00:12:51.827  fused_ordering(123)
00:12:51.827  fused_ordering(124)
00:12:51.827  fused_ordering(125)
00:12:51.827  fused_ordering(126)
00:12:51.827  fused_ordering(127)
00:12:51.827  fused_ordering(128)
00:12:51.827  fused_ordering(129)
00:12:51.827  fused_ordering(130)
00:12:51.827  fused_ordering(131)
00:12:51.827  fused_ordering(132)
00:12:51.827  fused_ordering(133)
00:12:51.827  fused_ordering(134)
00:12:51.827  fused_ordering(135)
00:12:51.827  fused_ordering(136)
00:12:51.827  fused_ordering(137)
00:12:51.827  fused_ordering(138)
00:12:51.827  fused_ordering(139)
00:12:51.827  fused_ordering(140)
00:12:51.827  fused_ordering(141)
00:12:51.827  fused_ordering(142)
00:12:51.827  fused_ordering(143)
00:12:51.827  fused_ordering(144)
00:12:51.827  fused_ordering(145)
00:12:51.827  fused_ordering(146)
00:12:51.827  fused_ordering(147)
00:12:51.827  fused_ordering(148)
00:12:51.827  fused_ordering(149)
00:12:51.827  fused_ordering(150)
00:12:51.827  fused_ordering(151)
00:12:51.827  fused_ordering(152)
00:12:51.827  fused_ordering(153)
00:12:51.827  fused_ordering(154)
00:12:51.827  fused_ordering(155)
00:12:51.827  fused_ordering(156)
00:12:51.827  fused_ordering(157)
00:12:51.827  fused_ordering(158)
00:12:51.827  fused_ordering(159)
00:12:51.827  fused_ordering(160)
00:12:51.827  fused_ordering(161)
00:12:51.827  fused_ordering(162)
00:12:51.827  fused_ordering(163)
00:12:51.827  fused_ordering(164)
00:12:51.827  fused_ordering(165)
00:12:51.827  fused_ordering(166)
00:12:51.827  fused_ordering(167)
00:12:51.827  fused_ordering(168)
00:12:51.827  fused_ordering(169)
00:12:51.827  fused_ordering(170)
00:12:51.827  fused_ordering(171)
00:12:51.827  fused_ordering(172)
00:12:51.827  fused_ordering(173)
00:12:51.827  fused_ordering(174)
00:12:51.827  fused_ordering(175)
00:12:51.827  fused_ordering(176)
00:12:51.827  fused_ordering(177)
00:12:51.827  fused_ordering(178)
00:12:51.827  fused_ordering(179)
00:12:51.827  fused_ordering(180)
00:12:51.827  fused_ordering(181)
00:12:51.827  fused_ordering(182)
00:12:51.827  fused_ordering(183)
00:12:51.827  fused_ordering(184)
00:12:51.827  fused_ordering(185)
00:12:51.827  fused_ordering(186)
00:12:51.827  fused_ordering(187)
00:12:51.827  fused_ordering(188)
00:12:51.827  fused_ordering(189)
00:12:51.827  fused_ordering(190)
00:12:51.827  fused_ordering(191)
00:12:51.827  fused_ordering(192)
00:12:51.827  fused_ordering(193)
00:12:51.827  fused_ordering(194)
00:12:51.827  fused_ordering(195)
00:12:51.827  fused_ordering(196)
00:12:51.827  fused_ordering(197)
00:12:51.827  fused_ordering(198)
00:12:51.827  fused_ordering(199)
00:12:51.827  fused_ordering(200)
00:12:51.827  fused_ordering(201)
00:12:51.827  fused_ordering(202)
00:12:51.827  fused_ordering(203)
00:12:51.827  fused_ordering(204)
00:12:51.827  fused_ordering(205)
00:12:51.827  fused_ordering(206)
00:12:51.827  fused_ordering(207)
00:12:51.827  fused_ordering(208)
00:12:51.827  fused_ordering(209)
00:12:51.827  fused_ordering(210)
00:12:51.827  fused_ordering(211)
00:12:51.827  fused_ordering(212)
00:12:51.827  fused_ordering(213)
00:12:51.827  fused_ordering(214)
00:12:51.827  fused_ordering(215)
00:12:51.827  fused_ordering(216)
00:12:51.827  fused_ordering(217)
00:12:51.827  fused_ordering(218)
00:12:51.827  fused_ordering(219)
00:12:51.827  fused_ordering(220)
00:12:51.827  fused_ordering(221)
00:12:51.827  fused_ordering(222)
00:12:51.827  fused_ordering(223)
00:12:51.827  fused_ordering(224)
00:12:51.827  fused_ordering(225)
00:12:51.827  fused_ordering(226)
00:12:51.827  fused_ordering(227)
00:12:51.827  fused_ordering(228)
00:12:51.827  fused_ordering(229)
00:12:51.827  fused_ordering(230)
00:12:51.827  fused_ordering(231)
00:12:51.827  fused_ordering(232)
00:12:51.827  fused_ordering(233)
00:12:51.827  fused_ordering(234)
00:12:51.827  fused_ordering(235)
00:12:51.827  fused_ordering(236)
00:12:51.827  fused_ordering(237)
00:12:51.827  fused_ordering(238)
00:12:51.827  fused_ordering(239)
00:12:51.827  fused_ordering(240)
00:12:51.827  fused_ordering(241)
00:12:51.827  fused_ordering(242)
00:12:51.827  fused_ordering(243)
00:12:51.827  fused_ordering(244)
00:12:51.827  fused_ordering(245)
00:12:51.827  fused_ordering(246)
00:12:51.827  fused_ordering(247)
00:12:51.827  fused_ordering(248)
00:12:51.827  fused_ordering(249)
00:12:51.827  fused_ordering(250)
00:12:51.827  fused_ordering(251)
00:12:51.827  fused_ordering(252)
00:12:51.827  fused_ordering(253)
00:12:51.827  fused_ordering(254)
00:12:51.827  fused_ordering(255)
00:12:51.827  fused_ordering(256)
00:12:51.827  fused_ordering(257)
00:12:51.827  fused_ordering(258)
00:12:51.827  fused_ordering(259)
00:12:51.827  fused_ordering(260)
00:12:51.827  fused_ordering(261)
00:12:51.827  fused_ordering(262)
00:12:51.827  fused_ordering(263)
00:12:51.827  fused_ordering(264)
00:12:51.827  fused_ordering(265)
00:12:51.827  fused_ordering(266)
00:12:51.827  fused_ordering(267)
00:12:51.827  fused_ordering(268)
00:12:51.827  fused_ordering(269)
00:12:51.827  fused_ordering(270)
00:12:51.827  fused_ordering(271)
00:12:51.827  fused_ordering(272)
00:12:51.827  fused_ordering(273)
00:12:51.827  fused_ordering(274)
00:12:51.827  fused_ordering(275)
00:12:51.827  fused_ordering(276)
00:12:51.827  fused_ordering(277)
00:12:51.827  fused_ordering(278)
00:12:51.827  fused_ordering(279)
00:12:51.827  fused_ordering(280)
00:12:51.827  fused_ordering(281)
00:12:51.827  fused_ordering(282)
00:12:51.827  fused_ordering(283)
00:12:51.827  fused_ordering(284)
00:12:51.828  fused_ordering(285)
00:12:51.828  fused_ordering(286)
00:12:51.828  fused_ordering(287)
00:12:51.828  fused_ordering(288)
00:12:51.828  fused_ordering(289)
00:12:51.828  fused_ordering(290)
00:12:51.828  fused_ordering(291)
00:12:51.828  fused_ordering(292)
00:12:51.828  fused_ordering(293)
00:12:51.828  fused_ordering(294)
00:12:51.828  fused_ordering(295)
00:12:51.828  fused_ordering(296)
00:12:51.828  fused_ordering(297)
00:12:51.828  fused_ordering(298)
00:12:51.828  fused_ordering(299)
00:12:51.828  fused_ordering(300)
00:12:51.828  fused_ordering(301)
00:12:51.828  fused_ordering(302)
00:12:51.828  fused_ordering(303)
00:12:51.828  fused_ordering(304)
00:12:51.828  fused_ordering(305)
00:12:51.828  fused_ordering(306)
00:12:51.828  fused_ordering(307)
00:12:51.828  fused_ordering(308)
00:12:51.828  fused_ordering(309)
00:12:51.828  fused_ordering(310)
00:12:51.828  fused_ordering(311)
00:12:51.828  fused_ordering(312)
00:12:51.828  fused_ordering(313)
00:12:51.828  fused_ordering(314)
00:12:51.828  fused_ordering(315)
00:12:51.828  fused_ordering(316)
00:12:51.828  fused_ordering(317)
00:12:51.828  fused_ordering(318)
00:12:51.828  fused_ordering(319)
00:12:51.828  fused_ordering(320)
00:12:51.828  fused_ordering(321)
00:12:51.828  fused_ordering(322)
00:12:51.828  fused_ordering(323)
00:12:51.828  fused_ordering(324)
00:12:51.828  fused_ordering(325)
00:12:51.828  fused_ordering(326)
00:12:51.828  fused_ordering(327)
00:12:51.828  fused_ordering(328)
00:12:51.828  fused_ordering(329)
00:12:51.828  fused_ordering(330)
00:12:51.828  fused_ordering(331)
00:12:51.828  fused_ordering(332)
00:12:51.828  fused_ordering(333)
00:12:51.828  fused_ordering(334)
00:12:51.828  fused_ordering(335)
00:12:51.828  fused_ordering(336)
00:12:51.828  fused_ordering(337)
00:12:51.828  fused_ordering(338)
00:12:51.828  fused_ordering(339)
00:12:51.828  fused_ordering(340)
00:12:51.828  fused_ordering(341)
00:12:51.828  fused_ordering(342)
00:12:51.828  fused_ordering(343)
00:12:51.828  fused_ordering(344)
00:12:51.828  fused_ordering(345)
00:12:51.828  fused_ordering(346)
00:12:51.828  fused_ordering(347)
00:12:51.828  fused_ordering(348)
00:12:51.828  fused_ordering(349)
00:12:51.828  fused_ordering(350)
00:12:51.828  fused_ordering(351)
00:12:51.828  fused_ordering(352)
00:12:51.828  fused_ordering(353)
00:12:51.828  fused_ordering(354)
00:12:51.828  fused_ordering(355)
00:12:51.828  fused_ordering(356)
00:12:51.828  fused_ordering(357)
00:12:51.828  fused_ordering(358)
00:12:51.828  fused_ordering(359)
00:12:51.828  fused_ordering(360)
00:12:51.828  fused_ordering(361)
00:12:51.828  fused_ordering(362)
00:12:51.828  fused_ordering(363)
00:12:51.828  fused_ordering(364)
00:12:51.828  fused_ordering(365)
00:12:51.828  fused_ordering(366)
00:12:51.828  fused_ordering(367)
00:12:51.828  fused_ordering(368)
00:12:51.828  fused_ordering(369)
00:12:51.828  fused_ordering(370)
00:12:51.828  fused_ordering(371)
00:12:51.828  fused_ordering(372)
00:12:51.828  fused_ordering(373)
00:12:51.828  fused_ordering(374)
00:12:51.828  fused_ordering(375)
00:12:51.828  fused_ordering(376)
00:12:51.828  fused_ordering(377)
00:12:51.828  fused_ordering(378)
00:12:51.828  fused_ordering(379)
00:12:51.828  fused_ordering(380)
00:12:51.828  fused_ordering(381)
00:12:51.828  fused_ordering(382)
00:12:51.828  fused_ordering(383)
00:12:51.828  fused_ordering(384)
00:12:51.828  fused_ordering(385)
00:12:51.828  fused_ordering(386)
00:12:51.828  fused_ordering(387)
00:12:51.828  fused_ordering(388)
00:12:51.828  fused_ordering(389)
00:12:51.828  fused_ordering(390)
00:12:51.828  fused_ordering(391)
00:12:51.828  fused_ordering(392)
00:12:51.828  fused_ordering(393)
00:12:51.828  fused_ordering(394)
00:12:51.828  fused_ordering(395)
00:12:51.828  fused_ordering(396)
00:12:51.828  fused_ordering(397)
00:12:51.828  fused_ordering(398)
00:12:51.828  fused_ordering(399)
00:12:51.828  fused_ordering(400)
00:12:51.828  fused_ordering(401)
00:12:51.828  fused_ordering(402)
00:12:51.828  fused_ordering(403)
00:12:51.828  fused_ordering(404)
00:12:51.828  fused_ordering(405)
00:12:51.828  fused_ordering(406)
00:12:51.828  fused_ordering(407)
00:12:51.828  fused_ordering(408)
00:12:51.828  fused_ordering(409)
00:12:51.828  fused_ordering(410)
00:12:52.396  fused_ordering(411)
00:12:52.396  fused_ordering(412)
00:12:52.396  fused_ordering(413)
00:12:52.396  fused_ordering(414)
00:12:52.396  fused_ordering(415)
00:12:52.396  fused_ordering(416)
00:12:52.396  fused_ordering(417)
00:12:52.396  fused_ordering(418)
00:12:52.396  fused_ordering(419)
00:12:52.396  fused_ordering(420)
00:12:52.396  fused_ordering(421)
00:12:52.396  fused_ordering(422)
00:12:52.396  fused_ordering(423)
00:12:52.396  fused_ordering(424)
00:12:52.396  fused_ordering(425)
00:12:52.396  fused_ordering(426)
00:12:52.396  fused_ordering(427)
00:12:52.396  fused_ordering(428)
00:12:52.396  fused_ordering(429)
00:12:52.396  fused_ordering(430)
00:12:52.396  fused_ordering(431)
00:12:52.396  fused_ordering(432)
00:12:52.396  fused_ordering(433)
00:12:52.396  fused_ordering(434)
00:12:52.396  fused_ordering(435)
00:12:52.396  fused_ordering(436)
00:12:52.396  fused_ordering(437)
00:12:52.396  fused_ordering(438)
00:12:52.396  fused_ordering(439)
00:12:52.396  fused_ordering(440)
00:12:52.396  fused_ordering(441)
00:12:52.396  fused_ordering(442)
00:12:52.396  fused_ordering(443)
00:12:52.396  fused_ordering(444)
00:12:52.396  fused_ordering(445)
00:12:52.396  fused_ordering(446)
00:12:52.396  fused_ordering(447)
00:12:52.396  fused_ordering(448)
00:12:52.396  fused_ordering(449)
00:12:52.396  fused_ordering(450)
00:12:52.396  fused_ordering(451)
00:12:52.396  fused_ordering(452)
00:12:52.396  fused_ordering(453)
00:12:52.396  fused_ordering(454)
00:12:52.396  fused_ordering(455)
00:12:52.396  fused_ordering(456)
00:12:52.396  fused_ordering(457)
00:12:52.396  fused_ordering(458)
00:12:52.396  fused_ordering(459)
00:12:52.396  fused_ordering(460)
00:12:52.396  fused_ordering(461)
00:12:52.396  fused_ordering(462)
00:12:52.396  fused_ordering(463)
00:12:52.396  fused_ordering(464)
00:12:52.396  fused_ordering(465)
00:12:52.396  fused_ordering(466)
00:12:52.396  fused_ordering(467)
00:12:52.396  fused_ordering(468)
00:12:52.396  fused_ordering(469)
00:12:52.396  fused_ordering(470)
00:12:52.396  fused_ordering(471)
00:12:52.396  fused_ordering(472)
00:12:52.396  fused_ordering(473)
00:12:52.396  fused_ordering(474)
00:12:52.396  fused_ordering(475)
00:12:52.396  fused_ordering(476)
00:12:52.396  fused_ordering(477)
00:12:52.396  fused_ordering(478)
00:12:52.396  fused_ordering(479)
00:12:52.396  fused_ordering(480)
00:12:52.396  fused_ordering(481)
00:12:52.396  fused_ordering(482)
00:12:52.396  fused_ordering(483)
00:12:52.396  fused_ordering(484)
00:12:52.396  fused_ordering(485)
00:12:52.396  fused_ordering(486)
00:12:52.396  fused_ordering(487)
00:12:52.396  fused_ordering(488)
00:12:52.396  fused_ordering(489)
00:12:52.396  fused_ordering(490)
00:12:52.396  fused_ordering(491)
00:12:52.396  fused_ordering(492)
00:12:52.396  fused_ordering(493)
00:12:52.396  fused_ordering(494)
00:12:52.396  fused_ordering(495)
00:12:52.396  fused_ordering(496)
00:12:52.396  fused_ordering(497)
00:12:52.396  fused_ordering(498)
00:12:52.396  fused_ordering(499)
00:12:52.396  fused_ordering(500)
00:12:52.396  fused_ordering(501)
00:12:52.396  fused_ordering(502)
00:12:52.396  fused_ordering(503)
00:12:52.396  fused_ordering(504)
00:12:52.396  fused_ordering(505)
00:12:52.396  fused_ordering(506)
00:12:52.396  fused_ordering(507)
00:12:52.396  fused_ordering(508)
00:12:52.396  fused_ordering(509)
00:12:52.396  fused_ordering(510)
00:12:52.396  fused_ordering(511)
00:12:52.396  fused_ordering(512)
00:12:52.396  fused_ordering(513)
00:12:52.396  fused_ordering(514)
00:12:52.396  fused_ordering(515)
00:12:52.396  fused_ordering(516)
00:12:52.396  fused_ordering(517)
00:12:52.396  fused_ordering(518)
00:12:52.396  fused_ordering(519)
00:12:52.396  fused_ordering(520)
00:12:52.396  fused_ordering(521)
00:12:52.396  fused_ordering(522)
00:12:52.396  fused_ordering(523)
00:12:52.396  fused_ordering(524)
00:12:52.396  fused_ordering(525)
00:12:52.396  fused_ordering(526)
00:12:52.396  fused_ordering(527)
00:12:52.396  fused_ordering(528)
00:12:52.396  fused_ordering(529)
00:12:52.396  fused_ordering(530)
00:12:52.396  fused_ordering(531)
00:12:52.396  fused_ordering(532)
00:12:52.397  fused_ordering(533)
00:12:52.397  fused_ordering(534)
00:12:52.397  fused_ordering(535)
00:12:52.397  fused_ordering(536)
00:12:52.397  fused_ordering(537)
00:12:52.397  fused_ordering(538)
00:12:52.397  fused_ordering(539)
00:12:52.397  fused_ordering(540)
00:12:52.397  fused_ordering(541)
00:12:52.397  fused_ordering(542)
00:12:52.397  fused_ordering(543)
00:12:52.397  fused_ordering(544)
00:12:52.397  fused_ordering(545)
00:12:52.397  fused_ordering(546)
00:12:52.397  fused_ordering(547)
00:12:52.397  fused_ordering(548)
00:12:52.397  fused_ordering(549)
00:12:52.397  fused_ordering(550)
00:12:52.397  fused_ordering(551)
00:12:52.397  fused_ordering(552)
00:12:52.397  fused_ordering(553)
00:12:52.397  fused_ordering(554)
00:12:52.397  fused_ordering(555)
00:12:52.397  fused_ordering(556)
00:12:52.397  fused_ordering(557)
00:12:52.397  fused_ordering(558)
00:12:52.397  fused_ordering(559)
00:12:52.397  fused_ordering(560)
00:12:52.397  fused_ordering(561)
00:12:52.397  fused_ordering(562)
00:12:52.397  fused_ordering(563)
00:12:52.397  fused_ordering(564)
00:12:52.397  fused_ordering(565)
00:12:52.397  fused_ordering(566)
00:12:52.397  fused_ordering(567)
00:12:52.397  fused_ordering(568)
00:12:52.397  fused_ordering(569)
00:12:52.397  fused_ordering(570)
00:12:52.397  fused_ordering(571)
00:12:52.397  fused_ordering(572)
00:12:52.397  fused_ordering(573)
00:12:52.397  fused_ordering(574)
00:12:52.397  fused_ordering(575)
00:12:52.397  fused_ordering(576)
00:12:52.397  fused_ordering(577)
00:12:52.397  fused_ordering(578)
00:12:52.397  fused_ordering(579)
00:12:52.397  fused_ordering(580)
00:12:52.397  fused_ordering(581)
00:12:52.397  fused_ordering(582)
00:12:52.397  fused_ordering(583)
00:12:52.397  fused_ordering(584)
00:12:52.397  fused_ordering(585)
00:12:52.397  fused_ordering(586)
00:12:52.397  fused_ordering(587)
00:12:52.397  fused_ordering(588)
00:12:52.397  fused_ordering(589)
00:12:52.397  fused_ordering(590)
00:12:52.397  fused_ordering(591)
00:12:52.397  fused_ordering(592)
00:12:52.397  fused_ordering(593)
00:12:52.397  fused_ordering(594)
00:12:52.397  fused_ordering(595)
00:12:52.397  fused_ordering(596)
00:12:52.397  fused_ordering(597)
00:12:52.397  fused_ordering(598)
00:12:52.397  fused_ordering(599)
00:12:52.397  fused_ordering(600)
00:12:52.397  fused_ordering(601)
00:12:52.397  fused_ordering(602)
00:12:52.397  fused_ordering(603)
00:12:52.397  fused_ordering(604)
00:12:52.397  fused_ordering(605)
00:12:52.397  fused_ordering(606)
00:12:52.397  fused_ordering(607)
00:12:52.397  fused_ordering(608)
00:12:52.397  fused_ordering(609)
00:12:52.397  fused_ordering(610)
00:12:52.397  fused_ordering(611)
00:12:52.397  fused_ordering(612)
00:12:52.397  fused_ordering(613)
00:12:52.397  fused_ordering(614)
00:12:52.397  fused_ordering(615)
00:12:52.656  fused_ordering(616)
00:12:52.656  fused_ordering(617)
00:12:52.656  fused_ordering(618)
00:12:52.656  fused_ordering(619)
00:12:52.656  fused_ordering(620)
00:12:52.656  fused_ordering(621)
00:12:52.656  fused_ordering(622)
00:12:52.656  fused_ordering(623)
00:12:52.656  fused_ordering(624)
00:12:52.656  fused_ordering(625)
00:12:52.656  fused_ordering(626)
00:12:52.656  fused_ordering(627)
00:12:52.656  fused_ordering(628)
00:12:52.656  fused_ordering(629)
00:12:52.656  fused_ordering(630)
00:12:52.656  fused_ordering(631)
00:12:52.656  fused_ordering(632)
00:12:52.656  fused_ordering(633)
00:12:52.656  fused_ordering(634)
00:12:52.656  fused_ordering(635)
00:12:52.656  fused_ordering(636)
00:12:52.656  fused_ordering(637)
00:12:52.656  fused_ordering(638)
00:12:52.656  fused_ordering(639)
00:12:52.656  fused_ordering(640)
00:12:52.656  fused_ordering(641)
00:12:52.656  fused_ordering(642)
00:12:52.656  fused_ordering(643)
00:12:52.656  fused_ordering(644)
00:12:52.656  fused_ordering(645)
00:12:52.656  fused_ordering(646)
00:12:52.656  fused_ordering(647)
00:12:52.656  fused_ordering(648)
00:12:52.656  fused_ordering(649)
00:12:52.656  fused_ordering(650)
00:12:52.656  fused_ordering(651)
00:12:52.656  fused_ordering(652)
00:12:52.656  fused_ordering(653)
00:12:52.656  fused_ordering(654)
00:12:52.656  fused_ordering(655)
00:12:52.656  fused_ordering(656)
00:12:52.656  fused_ordering(657)
00:12:52.656  fused_ordering(658)
00:12:52.656  fused_ordering(659)
00:12:52.656  fused_ordering(660)
00:12:52.656  fused_ordering(661)
00:12:52.656  fused_ordering(662)
00:12:52.656  fused_ordering(663)
00:12:52.656  fused_ordering(664)
00:12:52.656  fused_ordering(665)
00:12:52.656  fused_ordering(666)
00:12:52.656  fused_ordering(667)
00:12:52.656  fused_ordering(668)
00:12:52.656  fused_ordering(669)
00:12:52.656  fused_ordering(670)
00:12:52.656  fused_ordering(671)
00:12:52.656  fused_ordering(672)
00:12:52.656  fused_ordering(673)
00:12:52.656  fused_ordering(674)
00:12:52.656  fused_ordering(675)
00:12:52.656  fused_ordering(676)
00:12:52.656  fused_ordering(677)
00:12:52.656  fused_ordering(678)
00:12:52.656  fused_ordering(679)
00:12:52.656  fused_ordering(680)
00:12:52.656  fused_ordering(681)
00:12:52.656  fused_ordering(682)
00:12:52.656  fused_ordering(683)
00:12:52.656  fused_ordering(684)
00:12:52.656  fused_ordering(685)
00:12:52.656  fused_ordering(686)
00:12:52.656  fused_ordering(687)
00:12:52.656  fused_ordering(688)
00:12:52.656  fused_ordering(689)
00:12:52.656  fused_ordering(690)
00:12:52.656  fused_ordering(691)
00:12:52.656  fused_ordering(692)
00:12:52.656  fused_ordering(693)
00:12:52.656  fused_ordering(694)
00:12:52.656  fused_ordering(695)
00:12:52.656  fused_ordering(696)
00:12:52.656  fused_ordering(697)
00:12:52.656  fused_ordering(698)
00:12:52.656  fused_ordering(699)
00:12:52.656  fused_ordering(700)
00:12:52.656  fused_ordering(701)
00:12:52.656  fused_ordering(702)
00:12:52.656  fused_ordering(703)
00:12:52.656  fused_ordering(704)
00:12:52.656  fused_ordering(705)
00:12:52.656  fused_ordering(706)
00:12:52.656  fused_ordering(707)
00:12:52.656  fused_ordering(708)
00:12:52.656  fused_ordering(709)
00:12:52.656  fused_ordering(710)
00:12:52.656  fused_ordering(711)
00:12:52.656  fused_ordering(712)
00:12:52.656  fused_ordering(713)
00:12:52.656  fused_ordering(714)
00:12:52.656  fused_ordering(715)
00:12:52.656  fused_ordering(716)
00:12:52.656  fused_ordering(717)
00:12:52.656  fused_ordering(718)
00:12:52.656  fused_ordering(719)
00:12:52.656  fused_ordering(720)
00:12:52.656  fused_ordering(721)
00:12:52.656  fused_ordering(722)
00:12:52.656  fused_ordering(723)
00:12:52.656  fused_ordering(724)
00:12:52.656  fused_ordering(725)
00:12:52.656  fused_ordering(726)
00:12:52.656  fused_ordering(727)
00:12:52.656  fused_ordering(728)
00:12:52.656  fused_ordering(729)
00:12:52.656  fused_ordering(730)
00:12:52.656  fused_ordering(731)
00:12:52.656  fused_ordering(732)
00:12:52.656  fused_ordering(733)
00:12:52.656  fused_ordering(734)
00:12:52.656  fused_ordering(735)
00:12:52.656  fused_ordering(736)
00:12:52.656  fused_ordering(737)
00:12:52.656  fused_ordering(738)
00:12:52.656  fused_ordering(739)
00:12:52.656  fused_ordering(740)
00:12:52.656  fused_ordering(741)
00:12:52.656  fused_ordering(742)
00:12:52.656  fused_ordering(743)
00:12:52.656  fused_ordering(744)
00:12:52.656  fused_ordering(745)
00:12:52.656  fused_ordering(746)
00:12:52.656  fused_ordering(747)
00:12:52.656  fused_ordering(748)
00:12:52.656  fused_ordering(749)
00:12:52.656  fused_ordering(750)
00:12:52.656  fused_ordering(751)
00:12:52.656  fused_ordering(752)
00:12:52.656  fused_ordering(753)
00:12:52.656  fused_ordering(754)
00:12:52.656  fused_ordering(755)
00:12:52.656  fused_ordering(756)
00:12:52.656  fused_ordering(757)
00:12:52.656  fused_ordering(758)
00:12:52.656  fused_ordering(759)
00:12:52.656  fused_ordering(760)
00:12:52.656  fused_ordering(761)
00:12:52.656  fused_ordering(762)
00:12:52.656  fused_ordering(763)
00:12:52.656  fused_ordering(764)
00:12:52.656  fused_ordering(765)
00:12:52.656  fused_ordering(766)
00:12:52.656  fused_ordering(767)
00:12:52.656  fused_ordering(768)
00:12:52.656  fused_ordering(769)
00:12:52.656  fused_ordering(770)
00:12:52.656  fused_ordering(771)
00:12:52.656  fused_ordering(772)
00:12:52.656  fused_ordering(773)
00:12:52.656  fused_ordering(774)
00:12:52.656  fused_ordering(775)
00:12:52.656  fused_ordering(776)
00:12:52.656  fused_ordering(777)
00:12:52.656  fused_ordering(778)
00:12:52.656  fused_ordering(779)
00:12:52.656  fused_ordering(780)
00:12:52.656  fused_ordering(781)
00:12:52.656  fused_ordering(782)
00:12:52.656  fused_ordering(783)
00:12:52.656  fused_ordering(784)
00:12:52.656  fused_ordering(785)
00:12:52.656  fused_ordering(786)
00:12:52.656  fused_ordering(787)
00:12:52.656  fused_ordering(788)
00:12:52.656  fused_ordering(789)
00:12:52.656  fused_ordering(790)
00:12:52.656  fused_ordering(791)
00:12:52.656  fused_ordering(792)
00:12:52.656  fused_ordering(793)
00:12:52.656  fused_ordering(794)
00:12:52.656  fused_ordering(795)
00:12:52.656  fused_ordering(796)
00:12:52.656  fused_ordering(797)
00:12:52.656  fused_ordering(798)
00:12:52.656  fused_ordering(799)
00:12:52.656  fused_ordering(800)
00:12:52.656  fused_ordering(801)
00:12:52.656  fused_ordering(802)
00:12:52.656  fused_ordering(803)
00:12:52.656  fused_ordering(804)
00:12:52.656  fused_ordering(805)
00:12:52.656  fused_ordering(806)
00:12:52.656  fused_ordering(807)
00:12:52.656  fused_ordering(808)
00:12:52.656  fused_ordering(809)
00:12:52.656  fused_ordering(810)
00:12:52.656  fused_ordering(811)
00:12:52.656  fused_ordering(812)
00:12:52.656  fused_ordering(813)
00:12:52.656  fused_ordering(814)
00:12:52.656  fused_ordering(815)
00:12:52.656  fused_ordering(816)
00:12:52.656  fused_ordering(817)
00:12:52.656  fused_ordering(818)
00:12:52.656  fused_ordering(819)
00:12:52.656  fused_ordering(820)
00:12:53.224  fused_ordering(821)
00:12:53.224  fused_ordering(822)
00:12:53.224  fused_ordering(823)
00:12:53.224  fused_ordering(824)
00:12:53.224  fused_ordering(825)
00:12:53.224  fused_ordering(826)
00:12:53.224  fused_ordering(827)
00:12:53.224  fused_ordering(828)
00:12:53.224  fused_ordering(829)
00:12:53.224  fused_ordering(830)
00:12:53.224  fused_ordering(831)
00:12:53.224  fused_ordering(832)
00:12:53.224  fused_ordering(833)
00:12:53.224  fused_ordering(834)
00:12:53.224  fused_ordering(835)
00:12:53.224  fused_ordering(836)
00:12:53.224  fused_ordering(837)
00:12:53.224  fused_ordering(838)
00:12:53.224  fused_ordering(839)
00:12:53.224  fused_ordering(840)
00:12:53.224  fused_ordering(841)
00:12:53.224  fused_ordering(842)
00:12:53.224  fused_ordering(843)
00:12:53.224  fused_ordering(844)
00:12:53.224  fused_ordering(845)
00:12:53.224  fused_ordering(846)
00:12:53.224  fused_ordering(847)
00:12:53.224  fused_ordering(848)
00:12:53.224  fused_ordering(849)
00:12:53.224  fused_ordering(850)
00:12:53.224  fused_ordering(851)
00:12:53.224  fused_ordering(852)
00:12:53.224  fused_ordering(853)
00:12:53.224  fused_ordering(854)
00:12:53.224  fused_ordering(855)
00:12:53.224  fused_ordering(856)
00:12:53.225  fused_ordering(857)
00:12:53.225  fused_ordering(858)
00:12:53.225  fused_ordering(859)
00:12:53.225  fused_ordering(860)
00:12:53.225  fused_ordering(861)
00:12:53.225  fused_ordering(862)
00:12:53.225  fused_ordering(863)
00:12:53.225  fused_ordering(864)
00:12:53.225  fused_ordering(865)
00:12:53.225  fused_ordering(866)
00:12:53.225  fused_ordering(867)
00:12:53.225  fused_ordering(868)
00:12:53.225  fused_ordering(869)
00:12:53.225  fused_ordering(870)
00:12:53.225  fused_ordering(871)
00:12:53.225  fused_ordering(872)
00:12:53.225  fused_ordering(873)
00:12:53.225  fused_ordering(874)
00:12:53.225  fused_ordering(875)
00:12:53.225  fused_ordering(876)
00:12:53.225  fused_ordering(877)
00:12:53.225  fused_ordering(878)
00:12:53.225  fused_ordering(879)
00:12:53.225  fused_ordering(880)
00:12:53.225  fused_ordering(881)
00:12:53.225  fused_ordering(882)
00:12:53.225  fused_ordering(883)
00:12:53.225  fused_ordering(884)
00:12:53.225  fused_ordering(885)
00:12:53.225  fused_ordering(886)
00:12:53.225  fused_ordering(887)
00:12:53.225  fused_ordering(888)
00:12:53.225  fused_ordering(889)
00:12:53.225  fused_ordering(890)
00:12:53.225  fused_ordering(891)
00:12:53.225  fused_ordering(892)
00:12:53.225  fused_ordering(893)
00:12:53.225  fused_ordering(894)
00:12:53.225  fused_ordering(895)
00:12:53.225  fused_ordering(896)
00:12:53.225  fused_ordering(897)
00:12:53.225  fused_ordering(898)
00:12:53.225  fused_ordering(899)
00:12:53.225  fused_ordering(900)
00:12:53.225  fused_ordering(901)
00:12:53.225  fused_ordering(902)
00:12:53.225  fused_ordering(903)
00:12:53.225  fused_ordering(904)
00:12:53.225  fused_ordering(905)
00:12:53.225  fused_ordering(906)
00:12:53.225  fused_ordering(907)
00:12:53.225  fused_ordering(908)
00:12:53.225  fused_ordering(909)
00:12:53.225  fused_ordering(910)
00:12:53.225  fused_ordering(911)
00:12:53.225  fused_ordering(912)
00:12:53.225  fused_ordering(913)
00:12:53.225  fused_ordering(914)
00:12:53.225  fused_ordering(915)
00:12:53.225  fused_ordering(916)
00:12:53.225  fused_ordering(917)
00:12:53.225  fused_ordering(918)
00:12:53.225  fused_ordering(919)
00:12:53.225  fused_ordering(920)
00:12:53.225  fused_ordering(921)
00:12:53.225  fused_ordering(922)
00:12:53.225  fused_ordering(923)
00:12:53.225  fused_ordering(924)
00:12:53.225  fused_ordering(925)
00:12:53.225  fused_ordering(926)
00:12:53.225  fused_ordering(927)
00:12:53.225  fused_ordering(928)
00:12:53.225  fused_ordering(929)
00:12:53.225  fused_ordering(930)
00:12:53.225  fused_ordering(931)
00:12:53.225  fused_ordering(932)
00:12:53.225  fused_ordering(933)
00:12:53.225  fused_ordering(934)
00:12:53.225  fused_ordering(935)
00:12:53.225  fused_ordering(936)
00:12:53.225  fused_ordering(937)
00:12:53.225  fused_ordering(938)
00:12:53.225  fused_ordering(939)
00:12:53.225  fused_ordering(940)
00:12:53.225  fused_ordering(941)
00:12:53.225  fused_ordering(942)
00:12:53.225  fused_ordering(943)
00:12:53.225  fused_ordering(944)
00:12:53.225  fused_ordering(945)
00:12:53.225  fused_ordering(946)
00:12:53.225  fused_ordering(947)
00:12:53.225  fused_ordering(948)
00:12:53.225  fused_ordering(949)
00:12:53.225  fused_ordering(950)
00:12:53.225  fused_ordering(951)
00:12:53.225  fused_ordering(952)
00:12:53.225  fused_ordering(953)
00:12:53.225  fused_ordering(954)
00:12:53.225  fused_ordering(955)
00:12:53.225  fused_ordering(956)
00:12:53.225  fused_ordering(957)
00:12:53.225  fused_ordering(958)
00:12:53.225  fused_ordering(959)
00:12:53.225  fused_ordering(960)
00:12:53.225  fused_ordering(961)
00:12:53.225  fused_ordering(962)
00:12:53.225  fused_ordering(963)
00:12:53.225  fused_ordering(964)
00:12:53.225  fused_ordering(965)
00:12:53.225  fused_ordering(966)
00:12:53.225  fused_ordering(967)
00:12:53.225  fused_ordering(968)
00:12:53.225  fused_ordering(969)
00:12:53.225  fused_ordering(970)
00:12:53.225  fused_ordering(971)
00:12:53.225  fused_ordering(972)
00:12:53.225  fused_ordering(973)
00:12:53.225  fused_ordering(974)
00:12:53.225  fused_ordering(975)
00:12:53.225  fused_ordering(976)
00:12:53.225  fused_ordering(977)
00:12:53.225  fused_ordering(978)
00:12:53.225  fused_ordering(979)
00:12:53.225  fused_ordering(980)
00:12:53.225  fused_ordering(981)
00:12:53.225  fused_ordering(982)
00:12:53.225  fused_ordering(983)
00:12:53.225  fused_ordering(984)
00:12:53.225  fused_ordering(985)
00:12:53.225  fused_ordering(986)
00:12:53.225  fused_ordering(987)
00:12:53.225  fused_ordering(988)
00:12:53.225  fused_ordering(989)
00:12:53.225  fused_ordering(990)
00:12:53.225  fused_ordering(991)
00:12:53.225  fused_ordering(992)
00:12:53.225  fused_ordering(993)
00:12:53.225  fused_ordering(994)
00:12:53.225  fused_ordering(995)
00:12:53.225  fused_ordering(996)
00:12:53.225  fused_ordering(997)
00:12:53.225  fused_ordering(998)
00:12:53.225  fused_ordering(999)
00:12:53.225  fused_ordering(1000)
00:12:53.225  fused_ordering(1001)
00:12:53.225  fused_ordering(1002)
00:12:53.225  fused_ordering(1003)
00:12:53.225  fused_ordering(1004)
00:12:53.225  fused_ordering(1005)
00:12:53.225  fused_ordering(1006)
00:12:53.225  fused_ordering(1007)
00:12:53.225  fused_ordering(1008)
00:12:53.225  fused_ordering(1009)
00:12:53.225  fused_ordering(1010)
00:12:53.225  fused_ordering(1011)
00:12:53.225  fused_ordering(1012)
00:12:53.225  fused_ordering(1013)
00:12:53.225  fused_ordering(1014)
00:12:53.225  fused_ordering(1015)
00:12:53.225  fused_ordering(1016)
00:12:53.225  fused_ordering(1017)
00:12:53.225  fused_ordering(1018)
00:12:53.225  fused_ordering(1019)
00:12:53.225  fused_ordering(1020)
00:12:53.225  fused_ordering(1021)
00:12:53.225  fused_ordering(1022)
00:12:53.225  fused_ordering(1023)
00:12:53.225   06:23:09	-- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:12:53.225   06:23:09	-- target/fused_ordering.sh@25 -- # nvmftestfini
00:12:53.225   06:23:09	-- nvmf/common.sh@476 -- # nvmfcleanup
00:12:53.225   06:23:09	-- nvmf/common.sh@116 -- # sync
00:12:53.225   06:23:09	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:12:53.225   06:23:09	-- nvmf/common.sh@119 -- # set +e
00:12:53.225   06:23:09	-- nvmf/common.sh@120 -- # for i in {1..20}
00:12:53.225   06:23:09	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:12:53.225  rmmod nvme_tcp
00:12:53.225  rmmod nvme_fabrics
00:12:53.225  rmmod nvme_keyring
00:12:53.225   06:23:10	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:12:53.225   06:23:10	-- nvmf/common.sh@123 -- # set -e
00:12:53.225   06:23:10	-- nvmf/common.sh@124 -- # return 0
00:12:53.225   06:23:10	-- nvmf/common.sh@477 -- # '[' -n 70157 ']'
00:12:53.225   06:23:10	-- nvmf/common.sh@478 -- # killprocess 70157
00:12:53.225   06:23:10	-- common/autotest_common.sh@936 -- # '[' -z 70157 ']'
00:12:53.225   06:23:10	-- common/autotest_common.sh@940 -- # kill -0 70157
00:12:53.225    06:23:10	-- common/autotest_common.sh@941 -- # uname
00:12:53.225   06:23:10	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:53.225    06:23:10	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70157
00:12:53.225  killing process with pid 70157
00:12:53.225   06:23:10	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:12:53.225   06:23:10	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:12:53.225   06:23:10	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 70157'
00:12:53.225   06:23:10	-- common/autotest_common.sh@955 -- # kill 70157
00:12:53.225   06:23:10	-- common/autotest_common.sh@960 -- # wait 70157
00:12:53.484   06:23:10	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:12:53.484   06:23:10	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:12:53.484   06:23:10	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:12:53.484   06:23:10	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:53.484   06:23:10	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:12:53.484   06:23:10	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:53.484   06:23:10	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:53.484    06:23:10	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:53.484   06:23:10	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:12:53.484  
00:12:53.484  real	0m4.116s
00:12:53.484  user	0m4.581s
00:12:53.484  sys	0m1.449s
00:12:53.484   06:23:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:53.484  ************************************
00:12:53.484  END TEST nvmf_fused_ordering
00:12:53.484  ************************************
00:12:53.484   06:23:10	-- common/autotest_common.sh@10 -- # set +x
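Editor's note: the teardown traced above is the usual nvmftestfini pattern from SPDK's test/nvmf/common.sh: clear the EXIT trap, unload the initiator-side kernel NVMe/TCP modules, and kill the nvmf_tgt process recorded earlier. A minimal stand-alone sketch of that idea, with the module names and the killprocess behaviour taken from the trace and the error handling simplified, is:

    # unload initiator-side kernel modules; failures are tolerated, as the trace does with set +e
    set +e
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    set -e

    # stop the target process if it is still running
    killprocess() {
        local pid=$1
        # kill -0 only checks that the process exists
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            # wait works here because nvmf_tgt was started by this shell
            wait "$pid" 2>/dev/null
        fi
    }
    killprocess "$nvmfpid"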
00:12:53.743   06:23:10	-- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:12:53.743   06:23:10	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:53.743   06:23:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:53.743   06:23:10	-- common/autotest_common.sh@10 -- # set +x
00:12:53.743  ************************************
00:12:53.743  START TEST nvmf_delete_subsystem
00:12:53.743  ************************************
00:12:53.743   06:23:10	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:12:53.743  * Looking for test storage...
00:12:53.743  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:12:53.743    06:23:10	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:12:53.744     06:23:10	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:12:53.744     06:23:10	-- common/autotest_common.sh@1690 -- # lcov --version
00:12:53.744    06:23:10	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:12:53.744    06:23:10	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:12:53.744    06:23:10	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:12:53.744    06:23:10	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:12:53.744    06:23:10	-- scripts/common.sh@335 -- # IFS=.-:
00:12:53.744    06:23:10	-- scripts/common.sh@335 -- # read -ra ver1
00:12:53.744    06:23:10	-- scripts/common.sh@336 -- # IFS=.-:
00:12:53.744    06:23:10	-- scripts/common.sh@336 -- # read -ra ver2
00:12:53.744    06:23:10	-- scripts/common.sh@337 -- # local 'op=<'
00:12:53.744    06:23:10	-- scripts/common.sh@339 -- # ver1_l=2
00:12:53.744    06:23:10	-- scripts/common.sh@340 -- # ver2_l=1
00:12:53.744    06:23:10	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:12:53.744    06:23:10	-- scripts/common.sh@343 -- # case "$op" in
00:12:53.744    06:23:10	-- scripts/common.sh@344 -- # : 1
00:12:53.744    06:23:10	-- scripts/common.sh@363 -- # (( v = 0 ))
00:12:53.744    06:23:10	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:53.744     06:23:10	-- scripts/common.sh@364 -- # decimal 1
00:12:53.744     06:23:10	-- scripts/common.sh@352 -- # local d=1
00:12:53.744     06:23:10	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:53.744     06:23:10	-- scripts/common.sh@354 -- # echo 1
00:12:53.744    06:23:10	-- scripts/common.sh@364 -- # ver1[v]=1
00:12:53.744     06:23:10	-- scripts/common.sh@365 -- # decimal 2
00:12:53.744     06:23:10	-- scripts/common.sh@352 -- # local d=2
00:12:53.744     06:23:10	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:53.744     06:23:10	-- scripts/common.sh@354 -- # echo 2
00:12:53.744    06:23:10	-- scripts/common.sh@365 -- # ver2[v]=2
00:12:53.744    06:23:10	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:12:53.744    06:23:10	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:12:53.744    06:23:10	-- scripts/common.sh@367 -- # return 0
00:12:53.744    06:23:10	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:53.744    06:23:10	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:12:53.744  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:53.744  		--rc genhtml_branch_coverage=1
00:12:53.744  		--rc genhtml_function_coverage=1
00:12:53.744  		--rc genhtml_legend=1
00:12:53.744  		--rc geninfo_all_blocks=1
00:12:53.744  		--rc geninfo_unexecuted_blocks=1
00:12:53.744  		
00:12:53.744  		'
00:12:53.744    06:23:10	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:12:53.744  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:53.744  		--rc genhtml_branch_coverage=1
00:12:53.744  		--rc genhtml_function_coverage=1
00:12:53.744  		--rc genhtml_legend=1
00:12:53.744  		--rc geninfo_all_blocks=1
00:12:53.744  		--rc geninfo_unexecuted_blocks=1
00:12:53.744  		
00:12:53.744  		'
00:12:53.744    06:23:10	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:12:53.744  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:53.744  		--rc genhtml_branch_coverage=1
00:12:53.744  		--rc genhtml_function_coverage=1
00:12:53.744  		--rc genhtml_legend=1
00:12:53.744  		--rc geninfo_all_blocks=1
00:12:53.744  		--rc geninfo_unexecuted_blocks=1
00:12:53.744  		
00:12:53.744  		'
00:12:53.744    06:23:10	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:12:53.744  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:53.744  		--rc genhtml_branch_coverage=1
00:12:53.744  		--rc genhtml_function_coverage=1
00:12:53.744  		--rc genhtml_legend=1
00:12:53.744  		--rc geninfo_all_blocks=1
00:12:53.744  		--rc geninfo_unexecuted_blocks=1
00:12:53.744  		
00:12:53.744  		'
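Editor's note: the cmp_versions trace above splits the lcov version and the threshold "2" on '.', '-' and ':' and compares them field by field to decide which LCOV option set to export. A compact bash sketch of that comparison, assuming purely numeric fields and treating missing fields as 0, is:

    # returns 0 (true) when $1 < $2, comparing dot/dash/colon separated numeric fields
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"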
00:12:53.744   06:23:10	-- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:12:53.744     06:23:10	-- nvmf/common.sh@7 -- # uname -s
00:12:53.744    06:23:10	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:53.744    06:23:10	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:53.744    06:23:10	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:53.744    06:23:10	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:53.744    06:23:10	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:53.744    06:23:10	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:53.744    06:23:10	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:53.744    06:23:10	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:53.744    06:23:10	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:53.744     06:23:10	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:53.744    06:23:10	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:12:53.744    06:23:10	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:12:53.744    06:23:10	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:53.744    06:23:10	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:53.744    06:23:10	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:12:53.744    06:23:10	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:53.744     06:23:10	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:53.744     06:23:10	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:53.744     06:23:10	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:53.744      06:23:10	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:53.744      06:23:10	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:53.744      06:23:10	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:53.744      06:23:10	-- paths/export.sh@5 -- # export PATH
00:12:53.744      06:23:10	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:53.744    06:23:10	-- nvmf/common.sh@46 -- # : 0
00:12:53.744    06:23:10	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:12:53.744    06:23:10	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:12:53.744    06:23:10	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:12:53.744    06:23:10	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:53.744    06:23:10	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:53.744    06:23:10	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:12:53.744    06:23:10	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:12:53.744    06:23:10	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:12:53.744   06:23:10	-- target/delete_subsystem.sh@12 -- # nvmftestinit
00:12:53.744   06:23:10	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:12:53.744   06:23:10	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:53.744   06:23:10	-- nvmf/common.sh@436 -- # prepare_net_devs
00:12:53.744   06:23:10	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:12:53.744   06:23:10	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:12:53.744   06:23:10	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:53.744   06:23:10	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:53.744    06:23:10	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:53.744   06:23:10	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:12:53.744   06:23:10	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:12:53.744   06:23:10	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:12:53.744   06:23:10	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:12:53.744   06:23:10	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:12:53.744   06:23:10	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:12:53.744   06:23:10	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:53.744   06:23:10	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:53.744   06:23:10	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:12:53.744   06:23:10	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:12:53.744   06:23:10	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:12:53.744   06:23:10	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:12:53.744   06:23:10	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:12:53.744   06:23:10	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:53.744   06:23:10	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:12:53.744   06:23:10	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:12:53.744   06:23:10	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:12:53.744   06:23:10	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:12:53.744   06:23:10	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:12:54.003   06:23:10	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:12:54.003  Cannot find device "nvmf_tgt_br"
00:12:54.003   06:23:10	-- nvmf/common.sh@154 -- # true
00:12:54.003   06:23:10	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:12:54.003  Cannot find device "nvmf_tgt_br2"
00:12:54.003   06:23:10	-- nvmf/common.sh@155 -- # true
00:12:54.003   06:23:10	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:12:54.003   06:23:10	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:12:54.003  Cannot find device "nvmf_tgt_br"
00:12:54.003   06:23:10	-- nvmf/common.sh@157 -- # true
00:12:54.003   06:23:10	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:12:54.003  Cannot find device "nvmf_tgt_br2"
00:12:54.003   06:23:10	-- nvmf/common.sh@158 -- # true
00:12:54.003   06:23:10	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:12:54.003   06:23:10	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:12:54.003   06:23:10	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:54.003  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:54.003   06:23:10	-- nvmf/common.sh@161 -- # true
00:12:54.003   06:23:10	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:54.003  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:54.003   06:23:10	-- nvmf/common.sh@162 -- # true
00:12:54.003   06:23:10	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:12:54.003   06:23:10	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:12:54.003   06:23:10	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:12:54.003   06:23:10	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:12:54.003   06:23:10	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:12:54.003   06:23:10	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:12:54.003   06:23:10	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:12:54.003   06:23:10	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:12:54.003   06:23:10	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:12:54.003   06:23:10	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:12:54.003   06:23:10	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:12:54.003   06:23:10	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:12:54.003   06:23:10	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:12:54.003   06:23:10	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:12:54.003   06:23:10	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:54.003   06:23:10	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:54.003   06:23:10	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:12:54.003   06:23:10	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:12:54.003   06:23:10	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:12:54.003   06:23:10	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:54.262   06:23:10	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:54.262   06:23:10	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:54.262   06:23:11	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:54.262   06:23:11	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:12:54.262  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:54.262  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:12:54.262  
00:12:54.262  --- 10.0.0.2 ping statistics ---
00:12:54.262  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:54.262  rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:12:54.262   06:23:11	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:12:54.262  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:54.262  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms
00:12:54.262  
00:12:54.262  --- 10.0.0.3 ping statistics ---
00:12:54.262  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:54.262  rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
00:12:54.262   06:23:11	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:54.262  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:54.262  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:12:54.262  
00:12:54.262  --- 10.0.0.1 ping statistics ---
00:12:54.262  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:54.262  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
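Editor's note: the nvmf_veth_init steps traced above build the virtual test topology: a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, veth pairs for one initiator link and two target links, a bridge (nvmf_br) joining the host-side peers, and iptables rules for port 4420, followed by ping checks in both directions. Condensed from the commands in the trace (the pre-cleanup steps and error handling are omitted), the core of the setup is:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: one initiator link and two target links
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity check, matching the pings above
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1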
00:12:54.262   06:23:11	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:54.262   06:23:11	-- nvmf/common.sh@421 -- # return 0
00:12:54.262   06:23:11	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:12:54.262   06:23:11	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:54.262   06:23:11	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:12:54.262   06:23:11	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:12:54.262   06:23:11	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:54.262   06:23:11	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:12:54.262   06:23:11	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:12:54.262   06:23:11	-- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:12:54.262   06:23:11	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:12:54.262   06:23:11	-- common/autotest_common.sh@722 -- # xtrace_disable
00:12:54.262   06:23:11	-- common/autotest_common.sh@10 -- # set +x
00:12:54.262  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:54.262   06:23:11	-- nvmf/common.sh@469 -- # nvmfpid=70432
00:12:54.262   06:23:11	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:12:54.262   06:23:11	-- nvmf/common.sh@470 -- # waitforlisten 70432
00:12:54.262   06:23:11	-- common/autotest_common.sh@829 -- # '[' -z 70432 ']'
00:12:54.262   06:23:11	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:54.262   06:23:11	-- common/autotest_common.sh@834 -- # local max_retries=100
00:12:54.262   06:23:11	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:54.262   06:23:11	-- common/autotest_common.sh@838 -- # xtrace_disable
00:12:54.262   06:23:11	-- common/autotest_common.sh@10 -- # set +x
00:12:54.262  [2024-12-16 06:23:11.104757] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:54.262  [2024-12-16 06:23:11.104842] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:54.581  [2024-12-16 06:23:11.246651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:54.582  [2024-12-16 06:23:11.350251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:12:54.582  [2024-12-16 06:23:11.350678] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:54.582  [2024-12-16 06:23:11.350827] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:54.582  [2024-12-16 06:23:11.350968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:54.582  [2024-12-16 06:23:11.351233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:54.582  [2024-12-16 06:23:11.351246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:55.147   06:23:12	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:55.147   06:23:12	-- common/autotest_common.sh@862 -- # return 0
00:12:55.147   06:23:12	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:12:55.147   06:23:12	-- common/autotest_common.sh@728 -- # xtrace_disable
00:12:55.147   06:23:12	-- common/autotest_common.sh@10 -- # set +x
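Editor's note: nvmfappstart, as traced above, runs nvmf_tgt inside the target namespace and then waits for its RPC socket before any configuration is issued. A simplified sketch of that launch follows; the real waitforlisten helper retries via rpc.py, so the plain socket-existence loop here is only an approximation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # wait until the application starts listening on /var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done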
00:12:55.405   06:23:12	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:55.405   06:23:12	-- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:55.405   06:23:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.405   06:23:12	-- common/autotest_common.sh@10 -- # set +x
00:12:55.405  [2024-12-16 06:23:12.159324] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:55.405   06:23:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.405   06:23:12	-- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:55.405   06:23:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.405   06:23:12	-- common/autotest_common.sh@10 -- # set +x
00:12:55.405   06:23:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.405   06:23:12	-- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:55.405   06:23:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.405   06:23:12	-- common/autotest_common.sh@10 -- # set +x
00:12:55.405  [2024-12-16 06:23:12.175457] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:55.405   06:23:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.405   06:23:12	-- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:55.405   06:23:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.405   06:23:12	-- common/autotest_common.sh@10 -- # set +x
00:12:55.405  NULL1
00:12:55.405   06:23:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.405   06:23:12	-- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:55.405   06:23:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.405   06:23:12	-- common/autotest_common.sh@10 -- # set +x
00:12:55.405  Delay0
00:12:55.405   06:23:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.405   06:23:12	-- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:55.405   06:23:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.405   06:23:12	-- common/autotest_common.sh@10 -- # set +x
00:12:55.405   06:23:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.405   06:23:12	-- target/delete_subsystem.sh@28 -- # perf_pid=70483
00:12:55.406   06:23:12	-- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:12:55.406   06:23:12	-- target/delete_subsystem.sh@30 -- # sleep 2
00:12:55.406  [2024-12-16 06:23:12.370171] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
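Editor's note: before the deletion test starts, the target is provisioned entirely over JSON-RPC and spdk_nvme_perf is launched against it in the background. Stripped of the rpc_cmd wrapper, the sequence traced above is roughly as follows (rpc.py stands in for the wrapper, which talks to /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # back the subsystem with a null bdev wrapped in a delay bdev,
    # so I/O is still outstanding when the subsystem gets deleted
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # start load in the background; its pid is what the wait loop below polls
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!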
00:12:57.306   06:23:14	-- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:57.306   06:23:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.306   06:23:14	-- common/autotest_common.sh@10 -- # set +x
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  [2024-12-16 06:23:14.403722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0e950 is same with the state(5) to be set
00:12:57.568  starting I/O failed: -6
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Write completed with error (sct=0, sc=8)
00:12:57.568  starting I/O failed: -6
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.568  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  starting I/O failed: -6
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  starting I/O failed: -6
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  [2024-12-16 06:23:14.406365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2694000c00 is same with the state(5) to be set
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Write completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:57.569  Read completed with error (sct=0, sc=8)
00:12:58.506  [2024-12-16 06:23:15.387009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f5a0 is same with the state(5) to be set
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  [2024-12-16 06:23:15.404643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0d7d0 is same with the state(5) to be set
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  [2024-12-16 06:23:15.405030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0dd30 is same with the state(5) to be set
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  [2024-12-16 06:23:15.406937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f269400bf20 is same with the state(5) to be set
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.506  Write completed with error (sct=0, sc=8)
00:12:58.506  Read completed with error (sct=0, sc=8)
00:12:58.507  Write completed with error (sct=0, sc=8)
00:12:58.507  Read completed with error (sct=0, sc=8)
00:12:58.507  Read completed with error (sct=0, sc=8)
00:12:58.507  Read completed with error (sct=0, sc=8)
00:12:58.507  [2024-12-16 06:23:15.407185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f269400c600 is same with the state(5) to be set
00:12:58.507  [2024-12-16 06:23:15.408150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f5a0 (9): Bad file descriptor
00:12:58.507  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:12:58.507   06:23:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:58.507   06:23:15	-- target/delete_subsystem.sh@34 -- # delay=0
00:12:58.507   06:23:15	-- target/delete_subsystem.sh@35 -- # kill -0 70483
00:12:58.507   06:23:15	-- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:58.507  Initializing NVMe Controllers
00:12:58.507  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:58.507  Controller IO queue size 128, less than required.
00:12:58.507  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:58.507  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:58.507  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:58.507  Initialization complete. Launching workers.
00:12:58.507  ========================================================
00:12:58.507                                                                                                               Latency(us)
00:12:58.507  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:12:58.507  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     173.49       0.08  889842.31     435.44 1010027.20
00:12:58.507  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     155.59       0.08  980214.81     391.54 2001780.80
00:12:58.507  ========================================================
00:12:58.507  Total                                                                    :     329.08       0.16  932571.30     391.54 2001780.80
00:12:58.507  
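Editor's note on the numbers above: assuming the delay bdev latency arguments are in microseconds (1000000, i.e. about one second per I/O), the reported averages of roughly 0.89 s and 0.98 s on cores 2 and 3 line up with the configured delay, and the low IOPS and the "errors occurred" result are expected here, since nvmf_delete_subsystem was issued while the 128-deep queues were still full of in-flight requests.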
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@35 -- # kill -0 70483
00:12:59.072  /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70483) - No such process
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@45 -- # NOT wait 70483
00:12:59.072   06:23:15	-- common/autotest_common.sh@650 -- # local es=0
00:12:59.072   06:23:15	-- common/autotest_common.sh@652 -- # valid_exec_arg wait 70483
00:12:59.072   06:23:15	-- common/autotest_common.sh@638 -- # local arg=wait
00:12:59.072   06:23:15	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:59.072    06:23:15	-- common/autotest_common.sh@642 -- # type -t wait
00:12:59.072   06:23:15	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:59.072   06:23:15	-- common/autotest_common.sh@653 -- # wait 70483
00:12:59.072   06:23:15	-- common/autotest_common.sh@653 -- # es=1
00:12:59.072   06:23:15	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:59.072   06:23:15	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:59.072   06:23:15	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
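Editor's note: the pattern traced above, and repeated for the second perf run below, is to delete the subsystem while perf still has I/O queued and then poll the perf process until it exits, giving up after a bounded number of 0.5 s sleeps. Continuing the sketch above and using the same kill -0 probe as the script:

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    delay=0
    # kill -0 fails once the perf process has exited
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            echo "perf did not exit in time" >&2
            break
        fi
        sleep 0.5
    done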
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:59.072   06:23:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.072   06:23:15	-- common/autotest_common.sh@10 -- # set +x
00:12:59.072   06:23:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:59.072   06:23:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.072   06:23:15	-- common/autotest_common.sh@10 -- # set +x
00:12:59.072  [2024-12-16 06:23:15.933613] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:59.072   06:23:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:59.072   06:23:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.072   06:23:15	-- common/autotest_common.sh@10 -- # set +x
00:12:59.072   06:23:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@54 -- # perf_pid=70530
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@56 -- # delay=0
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@57 -- # kill -0 70530
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:12:59.072   06:23:15	-- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:59.331  [2024-12-16 06:23:16.113074] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:12:59.589   06:23:16	-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:59.589   06:23:16	-- target/delete_subsystem.sh@57 -- # kill -0 70530
00:12:59.589   06:23:16	-- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:00.156   06:23:16	-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:00.156   06:23:16	-- target/delete_subsystem.sh@57 -- # kill -0 70530
00:13:00.156   06:23:16	-- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:00.723   06:23:17	-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:00.723   06:23:17	-- target/delete_subsystem.sh@57 -- # kill -0 70530
00:13:00.723   06:23:17	-- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:01.290   06:23:17	-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:01.290   06:23:17	-- target/delete_subsystem.sh@57 -- # kill -0 70530
00:13:01.290   06:23:17	-- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:01.548   06:23:18	-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:01.548   06:23:18	-- target/delete_subsystem.sh@57 -- # kill -0 70530
00:13:01.548   06:23:18	-- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:02.114   06:23:18	-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:02.114   06:23:18	-- target/delete_subsystem.sh@57 -- # kill -0 70530
00:13:02.114   06:23:18	-- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:02.373  Initializing NVMe Controllers
00:13:02.373  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:02.373  Controller IO queue size 128, less than required.
00:13:02.373  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:02.373  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:02.373  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:02.373  Initialization complete. Launching workers.
00:13:02.373  ========================================================
00:13:02.373                                                                                                               Latency(us)
00:13:02.373  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:13:02.373  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1002655.26 1000162.32 1007319.63
00:13:02.373  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1004492.47 1000561.42 1010976.32
00:13:02.373  ========================================================
00:13:02.373  Total                                                                    :     256.00       0.12 1003573.86 1000162.32 1010976.32
00:13:02.373  
00:13:02.631   06:23:19	-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:02.631   06:23:19	-- target/delete_subsystem.sh@57 -- # kill -0 70530
00:13:02.631  /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70530) - No such process
00:13:02.631   06:23:19	-- target/delete_subsystem.sh@67 -- # wait 70530
00:13:02.631   06:23:19	-- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:13:02.631   06:23:19	-- target/delete_subsystem.sh@71 -- # nvmftestfini
00:13:02.631   06:23:19	-- nvmf/common.sh@476 -- # nvmfcleanup
00:13:02.631   06:23:19	-- nvmf/common.sh@116 -- # sync
00:13:02.631   06:23:19	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:02.631   06:23:19	-- nvmf/common.sh@119 -- # set +e
00:13:02.631   06:23:19	-- nvmf/common.sh@120 -- # for i in {1..20}
00:13:02.631   06:23:19	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:02.631  rmmod nvme_tcp
00:13:02.631  rmmod nvme_fabrics
00:13:02.631  rmmod nvme_keyring
00:13:02.631   06:23:19	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:02.631   06:23:19	-- nvmf/common.sh@123 -- # set -e
00:13:02.631   06:23:19	-- nvmf/common.sh@124 -- # return 0
00:13:02.631   06:23:19	-- nvmf/common.sh@477 -- # '[' -n 70432 ']'
00:13:02.631   06:23:19	-- nvmf/common.sh@478 -- # killprocess 70432
00:13:02.631   06:23:19	-- common/autotest_common.sh@936 -- # '[' -z 70432 ']'
00:13:02.631   06:23:19	-- common/autotest_common.sh@940 -- # kill -0 70432
00:13:02.631    06:23:19	-- common/autotest_common.sh@941 -- # uname
00:13:02.632   06:23:19	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:02.632    06:23:19	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70432
00:13:02.890  killing process with pid 70432
00:13:02.890   06:23:19	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:02.890   06:23:19	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:02.890   06:23:19	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 70432'
00:13:02.890   06:23:19	-- common/autotest_common.sh@955 -- # kill 70432
00:13:02.890   06:23:19	-- common/autotest_common.sh@960 -- # wait 70432
00:13:02.890   06:23:19	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:13:02.890   06:23:19	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:13:02.890   06:23:19	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:13:02.890   06:23:19	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:02.890   06:23:19	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:13:02.890   06:23:19	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:02.890   06:23:19	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:02.890    06:23:19	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:03.149   06:23:19	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:13:03.150  ************************************
00:13:03.150  END TEST nvmf_delete_subsystem
00:13:03.150  ************************************
00:13:03.150  
00:13:03.150  real	0m9.381s
00:13:03.150  user	0m28.859s
00:13:03.150  sys	0m1.489s
00:13:03.150   06:23:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:03.150   06:23:19	-- common/autotest_common.sh@10 -- # set +x
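For reference, the nvmf_delete_subsystem pattern exercised above boils down to: start spdk_nvme_perf against the TCP subsystem in the background, delete the subsystem while I/O is still in flight, then poll with kill -0 until perf exits (a non-zero exit is expected once the subsystem is gone). A minimal sketch, assuming the repo layout seen in this log and using only the flags visible above; the nvmf_delete_subsystem RPC call itself is not echoed in this log, so treat that line as an assumption about what the test script runs:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  # Start I/O in the background with the same flags used by the test above.
  "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Delete the subsystem while perf still has requests queued (assumed RPC call).
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # kill -0 only checks that the PID still exists; it sends no signal.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; break; }
      sleep 0.5
  done
  wait "$perf_pid" || true   # failure is expected once the subsystem disappears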
00:13:03.150   06:23:19	-- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]]
00:13:03.150   06:23:19	-- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]]
00:13:03.150   06:23:19	-- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:13:03.150   06:23:19	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:03.150   06:23:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:03.150   06:23:19	-- common/autotest_common.sh@10 -- # set +x
00:13:03.150  ************************************
00:13:03.150  START TEST nvmf_vfio_user
00:13:03.150  ************************************
00:13:03.150   06:23:19	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:13:03.150  * Looking for test storage...
00:13:03.150  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:13:03.150    06:23:20	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:03.150     06:23:20	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:03.150     06:23:20	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:03.150    06:23:20	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:03.150    06:23:20	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:03.150    06:23:20	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:03.150    06:23:20	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:03.150    06:23:20	-- scripts/common.sh@335 -- # IFS=.-:
00:13:03.150    06:23:20	-- scripts/common.sh@335 -- # read -ra ver1
00:13:03.150    06:23:20	-- scripts/common.sh@336 -- # IFS=.-:
00:13:03.150    06:23:20	-- scripts/common.sh@336 -- # read -ra ver2
00:13:03.150    06:23:20	-- scripts/common.sh@337 -- # local 'op=<'
00:13:03.150    06:23:20	-- scripts/common.sh@339 -- # ver1_l=2
00:13:03.150    06:23:20	-- scripts/common.sh@340 -- # ver2_l=1
00:13:03.150    06:23:20	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:03.150    06:23:20	-- scripts/common.sh@343 -- # case "$op" in
00:13:03.150    06:23:20	-- scripts/common.sh@344 -- # : 1
00:13:03.150    06:23:20	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:03.150    06:23:20	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:03.150     06:23:20	-- scripts/common.sh@364 -- # decimal 1
00:13:03.150     06:23:20	-- scripts/common.sh@352 -- # local d=1
00:13:03.150     06:23:20	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:03.150     06:23:20	-- scripts/common.sh@354 -- # echo 1
00:13:03.150    06:23:20	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:03.150     06:23:20	-- scripts/common.sh@365 -- # decimal 2
00:13:03.150     06:23:20	-- scripts/common.sh@352 -- # local d=2
00:13:03.150     06:23:20	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:03.150     06:23:20	-- scripts/common.sh@354 -- # echo 2
00:13:03.150    06:23:20	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:03.150    06:23:20	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:03.150    06:23:20	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:03.150    06:23:20	-- scripts/common.sh@367 -- # return 0
00:13:03.150    06:23:20	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:03.150    06:23:20	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:03.150  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:03.150  		--rc genhtml_branch_coverage=1
00:13:03.150  		--rc genhtml_function_coverage=1
00:13:03.150  		--rc genhtml_legend=1
00:13:03.150  		--rc geninfo_all_blocks=1
00:13:03.150  		--rc geninfo_unexecuted_blocks=1
00:13:03.150  		
00:13:03.150  		'
00:13:03.150    06:23:20	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:03.150  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:03.150  		--rc genhtml_branch_coverage=1
00:13:03.150  		--rc genhtml_function_coverage=1
00:13:03.150  		--rc genhtml_legend=1
00:13:03.150  		--rc geninfo_all_blocks=1
00:13:03.150  		--rc geninfo_unexecuted_blocks=1
00:13:03.150  		
00:13:03.150  		'
00:13:03.150    06:23:20	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:03.150  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:03.150  		--rc genhtml_branch_coverage=1
00:13:03.150  		--rc genhtml_function_coverage=1
00:13:03.150  		--rc genhtml_legend=1
00:13:03.150  		--rc geninfo_all_blocks=1
00:13:03.150  		--rc geninfo_unexecuted_blocks=1
00:13:03.150  		
00:13:03.150  		'
00:13:03.150    06:23:20	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:03.150  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:03.150  		--rc genhtml_branch_coverage=1
00:13:03.150  		--rc genhtml_function_coverage=1
00:13:03.150  		--rc genhtml_legend=1
00:13:03.150  		--rc geninfo_all_blocks=1
00:13:03.150  		--rc geninfo_unexecuted_blocks=1
00:13:03.150  		
00:13:03.150  		'
00:13:03.150   06:23:20	-- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:13:03.150     06:23:20	-- nvmf/common.sh@7 -- # uname -s
00:13:03.150    06:23:20	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:03.150    06:23:20	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:03.150    06:23:20	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:03.150    06:23:20	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:03.150    06:23:20	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:03.150    06:23:20	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:03.150    06:23:20	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:03.150    06:23:20	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:03.150    06:23:20	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:03.150     06:23:20	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:03.150    06:23:20	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:13:03.150    06:23:20	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:13:03.150    06:23:20	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:03.150    06:23:20	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:03.150    06:23:20	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:13:03.150    06:23:20	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:03.150     06:23:20	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:03.150     06:23:20	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:03.150     06:23:20	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:03.150      06:23:20	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:03.150      06:23:20	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:03.150      06:23:20	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:03.150      06:23:20	-- paths/export.sh@5 -- # export PATH
00:13:03.150      06:23:20	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:03.150    06:23:20	-- nvmf/common.sh@46 -- # : 0
00:13:03.150    06:23:20	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:03.150    06:23:20	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:03.150    06:23:20	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:03.150    06:23:20	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:03.409    06:23:20	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:03.409    06:23:20	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:13:03.409    06:23:20	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:13:03.409    06:23:20	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:13:03.409  Process pid: 70661
00:13:03.409  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70661
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70661'
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70661
00:13:03.409   06:23:20	-- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:13:03.409   06:23:20	-- common/autotest_common.sh@829 -- # '[' -z 70661 ']'
00:13:03.409   06:23:20	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:03.409   06:23:20	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:03.409   06:23:20	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:03.409   06:23:20	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:03.409   06:23:20	-- common/autotest_common.sh@10 -- # set +x
00:13:03.409  [2024-12-16 06:23:20.194245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:03.409  [2024-12-16 06:23:20.194350] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:03.409  [2024-12-16 06:23:20.327490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:03.668  [2024-12-16 06:23:20.407253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:03.668  [2024-12-16 06:23:20.407380] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:03.668  [2024-12-16 06:23:20.407390] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:03.668  [2024-12-16 06:23:20.407398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:03.668  [2024-12-16 06:23:20.407809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:03.668  [2024-12-16 06:23:20.407911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:03.668  [2024-12-16 06:23:20.408241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:03.668  [2024-12-16 06:23:20.408297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:04.233   06:23:21	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:04.233   06:23:21	-- common/autotest_common.sh@862 -- # return 0
00:13:04.233   06:23:21	-- target/nvmf_vfio_user.sh@62 -- # sleep 1
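The target bring-up for this test is visible in the xtrace above: nvmf_tgt is launched with -i 0 -e 0xFFFF -m '[0,1,2,3]', and waitforlisten blocks until the app answers on /var/tmp/spdk.sock before the transport is created. A rough stand-in for that wait, assuming rpc.py's rpc_get_methods and -t timeout flag (the real waitforlisten helper lives in autotest_common.sh and is not shown here):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  echo "Process pid: $nvmfpid"

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target responds.
  until "$rpc" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done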
00:13:05.168   06:23:22	-- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:13:05.734   06:23:22	-- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:13:05.734    06:23:22	-- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:13:05.734   06:23:22	-- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:05.734   06:23:22	-- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:13:05.734   06:23:22	-- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:13:05.734  Malloc1
00:13:05.992   06:23:22	-- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:13:05.992   06:23:22	-- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:13:06.250   06:23:23	-- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:13:06.513   06:23:23	-- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:06.513   06:23:23	-- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:13:06.513   06:23:23	-- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:13:06.788  Malloc2
00:13:06.788   06:23:23	-- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:13:07.052   06:23:23	-- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:13:07.310   06:23:24	-- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
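Taken together, the RPC calls above amount to the following vfio-user bring-up for the two devices set by NUM_DEVICES=2; this is only a condensed sketch, but every command and argument is copied from the log lines it summarizes:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user

  for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MiB bdev, 512 B blocks
      "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done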
00:13:07.568   06:23:24	-- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:13:07.568    06:23:24	-- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:13:07.568   06:23:24	-- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:07.568   06:23:24	-- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:13:07.568   06:23:24	-- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:13:07.568   06:23:24	-- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:13:07.568  [2024-12-16 06:23:24.457649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:07.568  [2024-12-16 06:23:24.457711] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70797 ]
00:13:07.828  [2024-12-16 06:23:24.595069] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:13:07.828  [2024-12-16 06:23:24.607961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:07.828  [2024-12-16 06:23:24.608008] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa2606ed000
00:13:07.828  [2024-12-16 06:23:24.608954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:07.828  [2024-12-16 06:23:24.609945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:07.828  [2024-12-16 06:23:24.610952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:07.828  [2024-12-16 06:23:24.611960] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:07.828  [2024-12-16 06:23:24.612951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:07.828  [2024-12-16 06:23:24.613966] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:07.828  [2024-12-16 06:23:24.614989] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:07.828  [2024-12-16 06:23:24.615982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:07.828  [2024-12-16 06:23:24.616991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:07.828  [2024-12-16 06:23:24.617020] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa25fd03000
00:13:07.828  [2024-12-16 06:23:24.618146] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:07.828  [2024-12-16 06:23:24.633440] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:13:07.828  [2024-12-16 06:23:24.633541] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout)
00:13:07.828  [2024-12-16 06:23:24.638161] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:13:07.828  [2024-12-16 06:23:24.638258] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:13:07.828  [2024-12-16 06:23:24.638393] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout)
00:13:07.828  [2024-12-16 06:23:24.638425] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout)
00:13:07.829  [2024-12-16 06:23:24.638433] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout)
00:13:07.829  [2024-12-16 06:23:24.639157] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:13:07.829  [2024-12-16 06:23:24.639197] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout)
00:13:07.829  [2024-12-16 06:23:24.639220] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout)
00:13:07.829  [2024-12-16 06:23:24.640159] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:13:07.829  [2024-12-16 06:23:24.640185] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout)
00:13:07.829  [2024-12-16 06:23:24.640198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms)
00:13:07.829  [2024-12-16 06:23:24.641163] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:13:07.829  [2024-12-16 06:23:24.641188] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:13:07.829  [2024-12-16 06:23:24.642170] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:13:07.829  [2024-12-16 06:23:24.642219] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0
00:13:07.829  [2024-12-16 06:23:24.642228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms)
00:13:07.829  [2024-12-16 06:23:24.642238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:13:07.829  [2024-12-16 06:23:24.642346] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1
00:13:07.829  [2024-12-16 06:23:24.642354] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:13:07.829  [2024-12-16 06:23:24.642360] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
00:13:07.829  [2024-12-16 06:23:24.643181] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
00:13:07.829  [2024-12-16 06:23:24.644180] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
00:13:07.829  [2024-12-16 06:23:24.645204] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:13:07.829  [2024-12-16 06:23:24.646266] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:13:07.829  [2024-12-16 06:23:24.647224] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:13:07.829  [2024-12-16 06:23:24.647249] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:13:07.829  [2024-12-16 06:23:24.647264] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647285] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout)
00:13:07.829  [2024-12-16 06:23:24.647302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647324] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:07.829  [2024-12-16 06:23:24.647332] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:07.829  [2024-12-16 06:23:24.647350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:07.829  [2024-12-16 06:23:24.647425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:13:07.829  [2024-12-16 06:23:24.647438] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072
00:13:07.829  [2024-12-16 06:23:24.647444] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072
00:13:07.829  [2024-12-16 06:23:24.647448] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001
00:13:07.829  [2024-12-16 06:23:24.647454] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:13:07.829  [2024-12-16 06:23:24.647459] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1
00:13:07.829  [2024-12-16 06:23:24.647464] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1
00:13:07.829  [2024-12-16 06:23:24.647469] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647498] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:13:07.829  [2024-12-16 06:23:24.647543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:13:07.829  [2024-12-16 06:23:24.647561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:07.829  [2024-12-16 06:23:24.647572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:07.829  [2024-12-16 06:23:24.647582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:07.829  [2024-12-16 06:23:24.647591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:07.829  [2024-12-16 06:23:24.647597] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647613] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:13:07.829  [2024-12-16 06:23:24.647641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:13:07.829  [2024-12-16 06:23:24.647648] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms
00:13:07.829  [2024-12-16 06:23:24.647654] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647662] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647673] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:13:07.829  [2024-12-16 06:23:24.647699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:13:07.829  [2024-12-16 06:23:24.647772] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647785] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647813] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:13:07.829  [2024-12-16 06:23:24.647835] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:13:07.829  [2024-12-16 06:23:24.647844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:13:07.829  [2024-12-16 06:23:24.647865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:13:07.829  [2024-12-16 06:23:24.647885] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added
00:13:07.829  [2024-12-16 06:23:24.647899] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647913] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.647932] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:07.829  [2024-12-16 06:23:24.647938] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:07.829  [2024-12-16 06:23:24.647946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:07.829  [2024-12-16 06:23:24.647972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:13:07.829  [2024-12-16 06:23:24.647993] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.648006] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.648016] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:07.829  [2024-12-16 06:23:24.648022] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:07.829  [2024-12-16 06:23:24.648030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:07.829  [2024-12-16 06:23:24.648048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:13:07.829  [2024-12-16 06:23:24.648059] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.648069] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.648082] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.648090] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.648097] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.648104] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID
00:13:07.829  [2024-12-16 06:23:24.648110] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms)
00:13:07.829  [2024-12-16 06:23:24.648116] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout)
00:13:07.829  [2024-12-16 06:23:24.648141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:13:07.830  [2024-12-16 06:23:24.648158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:13:07.830  [2024-12-16 06:23:24.648205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:13:07.830  [2024-12-16 06:23:24.648238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:13:07.830  [2024-12-16 06:23:24.648252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:13:07.830  [2024-12-16 06:23:24.648264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:13:07.830  [2024-12-16 06:23:24.648278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:13:07.830  [2024-12-16 06:23:24.648286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:13:07.830  [2024-12-16 06:23:24.648302] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:13:07.830  [2024-12-16 06:23:24.648308] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:13:07.830  [2024-12-16 06:23:24.648312] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:13:07.830  [2024-12-16 06:23:24.648316] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:13:07.830  [2024-12-16 06:23:24.648323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:13:07.830  [2024-12-16 06:23:24.648332] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:13:07.830  [2024-12-16 06:23:24.648337] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:13:07.830  [2024-12-16 06:23:24.648344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:13:07.830  [2024-12-16 06:23:24.648353] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:13:07.830  [2024-12-16 06:23:24.648359] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:07.830  [2024-12-16 06:23:24.648365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:07.830  =====================================================
00:13:07.830  NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:07.830  =====================================================
00:13:07.830  Controller Capabilities/Features
00:13:07.830  ================================
00:13:07.830  Vendor ID:                             4e58
00:13:07.830  Subsystem Vendor ID:                   4e58
00:13:07.830  Serial Number:                         SPDK1
00:13:07.830  Model Number:                          SPDK bdev Controller
00:13:07.830  Firmware Version:                      24.01.1
00:13:07.830  Recommended Arb Burst:                 6
00:13:07.830  IEEE OUI Identifier:                   8d 6b 50
00:13:07.830  Multi-path I/O
00:13:07.830    May have multiple subsystem ports:   Yes
00:13:07.830    May have multiple controllers:       Yes
00:13:07.830    Associated with SR-IOV VF:           No
00:13:07.830  Max Data Transfer Size:                131072
00:13:07.830  Max Number of Namespaces:              32
00:13:07.830  Max Number of I/O Queues:              127
00:13:07.830  NVMe Specification Version (VS):       1.3
00:13:07.830  NVMe Specification Version (Identify): 1.3
00:13:07.830  Maximum Queue Entries:                 256
00:13:07.830  Contiguous Queues Required:            Yes
00:13:07.830  Arbitration Mechanisms Supported
00:13:07.830    Weighted Round Robin:                Not Supported
00:13:07.830    Vendor Specific:                     Not Supported
00:13:07.830  Reset Timeout:                         15000 ms
00:13:07.830  Doorbell Stride:                       4 bytes
00:13:07.830  NVM Subsystem Reset:                   Not Supported
00:13:07.830  Command Sets Supported
00:13:07.830    NVM Command Set:                     Supported
00:13:07.830  Boot Partition:                        Not Supported
00:13:07.830  Memory Page Size Minimum:              4096 bytes
00:13:07.830  Memory Page Size Maximum:              4096 bytes
00:13:07.830  Persistent Memory Region:              Not Supported
00:13:07.830  Optional Asynchronous Events Supported
00:13:07.830    Namespace Attribute Notices:         Supported
00:13:07.830    Firmware Activation Notices:         Not Supported
00:13:07.830    ANA Change Notices:                  Not Supported
00:13:07.830    PLE Aggregate Log Change Notices:    Not Supported
00:13:07.830    LBA Status Info Alert Notices:       Not Supported
00:13:07.830    EGE Aggregate Log Change Notices:    Not Supported
00:13:07.830    Normal NVM Subsystem Shutdown event: Not Supported
00:13:07.830    Zone Descriptor Change Notices:      Not Supported
00:13:07.830    Discovery Log Change Notices:        Not Supported
00:13:07.830  Controller Attributes
00:13:07.830    128-bit Host Identifier:             Supported
00:13:07.830    Non-Operational Permissive Mode:     Not Supported
00:13:07.830    NVM Sets:                            Not Supported
00:13:07.830    Read Recovery Levels:                Not Supported
00:13:07.830    Endurance Groups:                    Not Supported
00:13:07.830    Predictable Latency Mode:            Not Supported
00:13:07.830    Traffic Based Keep ALive:            Not Supported
00:13:07.830    Namespace Granularity:               Not Supported
00:13:07.830    SQ Associations:                     Not Supported
00:13:07.830    UUID List:                           Not Supported
00:13:07.830    Multi-Domain Subsystem:              Not Supported
00:13:07.830    Fixed Capacity Management:           Not Supported
00:13:07.830    Variable Capacity Management:        Not Supported
00:13:07.830    Delete Endurance Group:              Not Supported
00:13:07.830    Delete NVM Set:                      Not Supported
00:13:07.830    Extended LBA Formats Supported:      Not Supported
00:13:07.830    Flexible Data Placement Supported:   Not Supported
00:13:07.830  
00:13:07.830  Controller Memory Buffer Support
00:13:07.830  ================================
00:13:07.830  Supported:                             No
00:13:07.830  
00:13:07.830  Persistent Memory Region Support
00:13:07.830  ================================
00:13:07.830  Supported:                             No
00:13:07.830  
00:13:07.830  Admin Command Set Attributes
00:13:07.830  ============================
00:13:07.830  Security Send/Receive:                 Not Supported
00:13:07.830  Format NVM:                            Not Supported
00:13:07.830  Firmware Activate/Download:            Not Supported
00:13:07.830  Namespace Management:                  Not Supported
00:13:07.830  Device Self-Test:                      Not Supported
00:13:07.830  Directives:                            Not Supported
00:13:07.830  NVMe-MI:                               Not Supported
00:13:07.830  Virtualization Management:             Not Supported
00:13:07.830  Doorbell Buffer Config:                Not Supported
00:13:07.830  Get LBA Status Capability:             Not Supported
00:13:07.830  Command & Feature Lockdown Capability: Not Supported
00:13:07.830  Abort Command Limit:                   4
00:13:07.830  Async Event Request Limit:             4
00:13:07.830  Number of Firmware Slots:              N/A
00:13:07.830  Firmware Slot 1 Read-Only:             N/A
00:13:07.830  [2024-12-16 06:23:24.648382] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:13:07.830  [2024-12-16 06:23:24.648390] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:13:07.830  [2024-12-16 06:23:24.648397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:13:07.830  [2024-12-16 06:23:24.648405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:13:07.830  [2024-12-16 06:23:24.648426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:13:07.830  [2024-12-16 06:23:24.648439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:13:07.830  [2024-12-16 06:23:24.648464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:13:07.830  Firmware Activation Without Reset:     N/A
00:13:07.830  Multiple Update Detection Support:     N/A
00:13:07.830  Firmware Update Granularity:           No Information Provided
00:13:07.830  Per-Namespace SMART Log:               No
00:13:07.830  Asymmetric Namespace Access Log Page:  Not Supported
00:13:07.830  Subsystem NQN:                         nqn.2019-07.io.spdk:cnode1
00:13:07.830  Command Effects Log Page:              Supported
00:13:07.830  Get Log Page Extended Data:            Supported
00:13:07.830  Telemetry Log Pages:                   Not Supported
00:13:07.830  Persistent Event Log Pages:            Not Supported
00:13:07.830  Supported Log Pages Log Page:          May Support
00:13:07.830  Commands Supported & Effects Log Page: Not Supported
00:13:07.830  Feature Identifiers & Effects Log Page:May Support
00:13:07.830  NVMe-MI Commands & Effects Log Page:   May Support
00:13:07.830  Data Area 4 for Telemetry Log:         Not Supported
00:13:07.830  Error Log Page Entries Supported:      128
00:13:07.830  Keep Alive:                            Supported
00:13:07.830  Keep Alive Granularity:                10000 ms
00:13:07.830  
00:13:07.830  NVM Command Set Attributes
00:13:07.830  ==========================
00:13:07.830  Submission Queue Entry Size
00:13:07.830    Max:                       64
00:13:07.830    Min:                       64
00:13:07.830  Completion Queue Entry Size
00:13:07.831    Max:                       16
00:13:07.831    Min:                       16
00:13:07.831  Number of Namespaces:        32
00:13:07.831  Compare Command:             Supported
00:13:07.831  Write Uncorrectable Command: Not Supported
00:13:07.831  Dataset Management Command:  Supported
00:13:07.831  Write Zeroes Command:        Supported
00:13:07.831  Set Features Save Field:     Not Supported
00:13:07.831  Reservations:                Not Supported
00:13:07.831  Timestamp:                   Not Supported
00:13:07.831  Copy:                        Supported
00:13:07.831  Volatile Write Cache:        Present
00:13:07.831  Atomic Write Unit (Normal):  1
00:13:07.831  Atomic Write Unit (PFail):   1
00:13:07.831  Atomic Compare & Write Unit: 1
00:13:07.831  Fused Compare & Write:       Supported
00:13:07.831  Scatter-Gather List
00:13:07.831    SGL Command Set:           Supported (Dword aligned)
00:13:07.831    SGL Keyed:                 Not Supported
00:13:07.831    SGL Bit Bucket Descriptor: Not Supported
00:13:07.831    SGL Metadata Pointer:      Not Supported
00:13:07.831    Oversized SGL:             Not Supported
00:13:07.831    SGL Metadata Address:      Not Supported
00:13:07.831    SGL Offset:                Not Supported
00:13:07.831    Transport SGL Data Block:  Not Supported
00:13:07.831  Replay Protected Memory Block:  Not Supported
00:13:07.831  
00:13:07.831  Firmware Slot Information
00:13:07.831  =========================
00:13:07.831  Active slot:                 1
00:13:07.831  Slot 1 Firmware Revision:    24.01.1
00:13:07.831  
00:13:07.831  
00:13:07.831  Commands Supported and Effects
00:13:07.831  ==============================
00:13:07.831  Admin Commands
00:13:07.831  --------------
00:13:07.831                    Get Log Page (02h): Supported 
00:13:07.831                        Identify (06h): Supported 
00:13:07.831                           Abort (08h): Supported 
00:13:07.831                    Set Features (09h): Supported 
00:13:07.831                    Get Features (0Ah): Supported 
00:13:07.831      Asynchronous Event Request (0Ch): Supported 
00:13:07.831                      Keep Alive (18h): Supported 
00:13:07.831  I/O Commands
00:13:07.831  ------------
00:13:07.831                           Flush (00h): Supported LBA-Change 
00:13:07.831                           Write (01h): Supported LBA-Change 
00:13:07.831                            Read (02h): Supported 
00:13:07.831                         Compare (05h): Supported 
00:13:07.831                    Write Zeroes (08h): Supported LBA-Change 
00:13:07.831              Dataset Management (09h): Supported LBA-Change 
00:13:07.831                            Copy (19h): Supported LBA-Change 
00:13:07.831                         Unknown (79h): Supported LBA-Change 
00:13:07.831                         Unknown (7Ah): Supported 
00:13:07.831  
00:13:07.831  Error Log
00:13:07.831  =========
00:13:07.831  
00:13:07.831  Arbitration
00:13:07.831  ===========
00:13:07.831  Arbitration Burst:           1
00:13:07.831  
00:13:07.831  Power Management
00:13:07.831  ================
00:13:07.831  Number of Power States:          1
00:13:07.831  Current Power State:             Power State #0
00:13:07.831  Power State #0:
00:13:07.831    Max Power:                      0.00 W
00:13:07.831    Non-Operational State:         Operational
00:13:07.831    Entry Latency:                 Not Reported
00:13:07.831    Exit Latency:                  Not Reported
00:13:07.831    Relative Read Throughput:      0
00:13:07.831    Relative Read Latency:         0
00:13:07.831    Relative Write Throughput:     0
00:13:07.831    Relative Write Latency:        0
00:13:07.831    Idle Power:                     Not Reported
00:13:07.831    Active Power:                   Not Reported
00:13:07.831  Non-Operational Permissive Mode: Not Supported
00:13:07.831  
00:13:07.831  Health Information
00:13:07.831  ==================
00:13:07.831  Critical Warnings:
00:13:07.831    Available Spare Space:     OK
00:13:07.831    Temperature:               OK
00:13:07.831    Device Reliability:        OK
00:13:07.831    Read Only:                 No
00:13:07.831    Volatile Memory Backup:    OK
00:13:07.831  Current Temperature:         0 Kelvin (-273 Celsius)
00:13:07.831  [2024-12-16 06:23:24.648610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:13:07.831  [2024-12-16 06:23:24.648625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:13:07.831  [2024-12-16 06:23:24.648660] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD
00:13:07.831  [2024-12-16 06:23:24.648674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:07.831  [2024-12-16 06:23:24.648682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:07.831  [2024-12-16 06:23:24.648689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:07.831  [2024-12-16 06:23:24.648696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:07.831  [2024-12-16 06:23:24.652562] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:13:07.831  [2024-12-16 06:23:24.652605] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:13:07.831  [2024-12-16 06:23:24.653338] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us
00:13:07.831  [2024-12-16 06:23:24.653357] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms
00:13:07.831  [2024-12-16 06:23:24.654288] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:13:07.831  [2024-12-16 06:23:24.654336] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds
00:13:07.831  [2024-12-16 06:23:24.654409] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:13:07.831  [2024-12-16 06:23:24.656336] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:07.831  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:13:07.831  Available Spare:             0%
00:13:07.831  Available Spare Threshold:   0%
00:13:07.831  Life Percentage Used:        0%
00:13:07.831  Data Units Read:             0
00:13:07.831  Data Units Written:          0
00:13:07.831  Host Read Commands:          0
00:13:07.831  Host Write Commands:         0
00:13:07.831  Controller Busy Time:        0 minutes
00:13:07.831  Power Cycles:                0
00:13:07.831  Power On Hours:              0 hours
00:13:07.831  Unsafe Shutdowns:            0
00:13:07.831  Unrecoverable Media Errors:  0
00:13:07.831  Lifetime Error Log Entries:  0
00:13:07.831  Warning Temperature Time:    0 minutes
00:13:07.831  Critical Temperature Time:   0 minutes
00:13:07.831  
00:13:07.831  Number of Queues
00:13:07.831  ================
00:13:07.831  Number of I/O Submission Queues:      127
00:13:07.831  Number of I/O Completion Queues:      127
00:13:07.831  
00:13:07.831  Active Namespaces
00:13:07.831  =================
00:13:07.831  Namespace ID:1
00:13:07.831  Error Recovery Timeout:                Unlimited
00:13:07.831  Command Set Identifier:                NVM (00h)
00:13:07.831  Deallocate:                            Supported
00:13:07.831  Deallocated/Unwritten Error:           Not Supported
00:13:07.831  Deallocated Read Value:                Unknown
00:13:07.831  Deallocate in Write Zeroes:            Not Supported
00:13:07.831  Deallocated Guard Field:               0xFFFF
00:13:07.831  Flush:                                 Supported
00:13:07.831  Reservation:                           Supported
00:13:07.831  Namespace Sharing Capabilities:        Multiple Controllers
00:13:07.831  Size (in LBAs):                        131072 (0GiB)
00:13:07.831  Capacity (in LBAs):                    131072 (0GiB)
00:13:07.831  Utilization (in LBAs):                 131072 (0GiB)
00:13:07.831  NGUID:                                 8FF1D1354DCE473EBA170C04A13C8B91
00:13:07.831  UUID:                                  8ff1d135-4dce-473e-ba17-0c04a13c8b91
00:13:07.831  Thin Provisioning:                     Not Supported
00:13:07.831  Per-NS Atomic Units:                   Yes
00:13:07.831    Atomic Boundary Size (Normal):       0
00:13:07.831    Atomic Boundary Size (PFail):        0
00:13:07.831    Atomic Boundary Offset:              0
00:13:07.831  Maximum Single Source Range Length:    65535
00:13:07.831  Maximum Copy Length:                   65535
00:13:07.831  Maximum Source Range Count:            1
00:13:07.831  NGUID/EUI64 Never Reused:              No
00:13:07.831  Namespace Write Protected:             No
00:13:07.831  Number of LBA Formats:                 1
00:13:07.831  Current LBA Format:                    LBA Format #00
00:13:07.831  LBA Format #00: Data Size:   512  Metadata Size:     0
00:13:07.831  
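Note: the controller and namespace dump above is the report format produced by SPDK's spdk_nvme_identify utility. A comparable invocation against this endpoint, mirroring the @83 run recorded later in this log for cnode2 (the per-flag notes are editorial, not part of the log), is:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci
  # -r : transport ID (VFIOUSER transport, vfio-user socket directory, target subsystem NQN)
  # -L : enable the named debug log flags, which account for the *DEBUG*/*NOTICE* lines interleaved above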
00:13:07.831   06:23:24	-- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:13:13.096  Initializing NVMe Controllers
00:13:13.096  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:13.096  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:13:13.096  Initialization complete. Launching workers.
00:13:13.096  ========================================================
00:13:13.096                                                                                                           Latency(us)
00:13:13.096  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:13:13.096  VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core  1:   37774.67     147.56    3388.00    1036.21   10470.64
00:13:13.096  ========================================================
00:13:13.096  Total                                                                :   37774.67     147.56    3388.00    1036.21   10470.64
00:13:13.096  
00:13:13.096   06:23:30	-- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:13:19.653  Initializing NVMe Controllers
00:13:19.653  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:19.653  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:13:19.653  Initialization complete. Launching workers.
00:13:19.653  ========================================================
00:13:19.653                                                                                                           Latency(us)
00:13:19.653  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:13:19.653  VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core  1:   16037.12      62.64    7986.76    6031.33   16782.62
00:13:19.653  ========================================================
00:13:19.653  Total                                                                :   16037.12      62.64    7986.76    6031.33   16782.62
00:13:19.653  
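Note: the two spdk_nvme_perf runs above (@84 and @85) differ only in the workload flag: the read pass sustained about 37.8k IOPS at roughly 3.4 ms average latency, the write pass about 16.0k IOPS at roughly 8.0 ms. A condensed sketch of the recorded commands, with editorial flag notes and the same transport ID string:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2   # queue depth 128, 4 KiB I/O, 5 s, core mask 0x2 (core 1)
  $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # identical parameters, write workload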
00:13:19.653   06:23:35	-- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:13:23.847  Initializing NVMe Controllers
00:13:23.847  Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:23.847  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:23.847  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:13:23.847  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:13:23.847  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:13:23.847  Initialization complete. Launching workers.
00:13:23.847  Starting thread on core 2
00:13:23.847  Starting thread on core 3
00:13:23.847  Starting thread on core 1
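Note: the @86 step runs SPDK's reconnect example, which drives mixed I/O from three worker cores (mask 0xE covers cores 1-3, matching the three threads launched above). The recorded command, restated with editorial flag notes:

  /home/vagrant/spdk_repo/spdk/build/examples/reconnect \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  # -q 32 : queue depth; -o 4096 : 4 KiB I/O; -w randrw -M 50 : random mixed workload, 50% reads; -t 5 : 5 s run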
00:13:23.847   06:23:40	-- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:13:27.132  Initializing NVMe Controllers
00:13:27.132  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:13:27.132  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:13:27.132  Associating SPDK bdev Controller (SPDK1               ) with lcore 0
00:13:27.132  Associating SPDK bdev Controller (SPDK1               ) with lcore 1
00:13:27.132  Associating SPDK bdev Controller (SPDK1               ) with lcore 2
00:13:27.132  Associating SPDK bdev Controller (SPDK1               ) with lcore 3
00:13:27.132  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:13:27.132  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:13:27.132  Initialization complete. Launching workers.
00:13:27.132  Starting thread on core 1 with urgent priority queue
00:13:27.132  Starting thread on core 2 with urgent priority queue
00:13:27.132  Starting thread on core 3 with urgent priority queue
00:13:27.132  Starting thread on core 0 with urgent priority queue
00:13:27.132  SPDK bdev Controller (SPDK1               ) core 0:  5923.33 IO/s    16.88 secs/100000 ios
00:13:27.132  SPDK bdev Controller (SPDK1               ) core 1:  6466.67 IO/s    15.46 secs/100000 ios
00:13:27.132  SPDK bdev Controller (SPDK1               ) core 2:  5842.33 IO/s    17.12 secs/100000 ios
00:13:27.132  SPDK bdev Controller (SPDK1               ) core 3:  6637.67 IO/s    15.07 secs/100000 ios
00:13:27.132  ========================================================
00:13:27.132  
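Note: the @87 arbitration example echoes its effective configuration before launching workers, so the short command and the expanded option set it reports can be read together (both copied from the log above):

  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
  # effective configuration, as printed by the tool itself:
  #   arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
  # the four per-core result lines above (cores 0-3, mask 0xf) each report IO/s and projected seconds per 100000 I/Os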
00:13:27.132   06:23:44	-- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:13:27.699  Initializing NVMe Controllers
00:13:27.699  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:13:27.699  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:13:27.699    Namespace ID: 1 size: 0GB
00:13:27.699  Initialization complete.
00:13:27.699  INFO: using host memory buffer for IO
00:13:27.699  Hello world!
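Note: the @88 hello_world example only attaches to the controller, reports the namespace size, performs a small I/O through a host memory buffer (per the INFO line above), and prints the greeting. The recorded command:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'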
00:13:27.699   06:23:44	-- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:13:29.071  Initializing NVMe Controllers
00:13:29.071  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:13:29.071  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:13:29.071  Initialization complete. Launching workers.
00:13:29.071  submit (in ns)   avg, min, max =   8165.0,   3248.2, 4086373.6
00:13:29.071  complete (in ns) avg, min, max =  22012.4,   1880.9, 4459733.6
00:13:29.071  
00:13:29.071  Submit histogram
00:13:29.071  ================
00:13:29.071         Range in us     Cumulative     Count
00:13:29.071      3.244 -     3.258:    0.4133%  (       61)
00:13:29.071      3.258 -     3.273:    2.6353%  (      328)
00:13:29.071      3.273 -     3.287:   12.8176%  (     1503)
00:13:29.071      3.287 -     3.302:   25.8316%  (     1921)
00:13:29.071      3.302 -     3.316:   37.7752%  (     1763)
00:13:29.071      3.316 -     3.331:   44.1637%  (      943)
00:13:29.071      3.331 -     3.345:   48.1471%  (      588)
00:13:29.071      3.345 -     3.360:   53.2417%  (      752)
00:13:29.071      3.360 -     3.375:   58.7833%  (      818)
00:13:29.071      3.375 -     3.389:   63.6000%  (      711)
00:13:29.071      3.389 -     3.404:   67.3870%  (      559)
00:13:29.071      3.404 -     3.418:   69.5888%  (      325)
00:13:29.071      3.418 -     3.433:   71.3570%  (      261)
00:13:29.071      3.433 -     3.447:   73.5113%  (      318)
00:13:29.071      3.447 -     3.462:   75.4217%  (      282)
00:13:29.071      3.462 -     3.476:   77.3593%  (      286)
00:13:29.071      3.476 -     3.491:   79.0597%  (      251)
00:13:29.071      3.491 -     3.505:   79.9675%  (      134)
00:13:29.071      3.505 -     3.520:   80.7804%  (      120)
00:13:29.071      3.520 -     3.535:   81.5392%  (      112)
00:13:29.071      3.535 -     3.549:   82.3250%  (      116)
00:13:29.071      3.549 -     3.564:   83.3683%  (      154)
00:13:29.071      3.564 -     3.578:   84.0864%  (      106)
00:13:29.072      3.578 -     3.593:   84.5403%  (       67)
00:13:29.072      3.593 -     3.607:   85.1433%  (       89)
00:13:29.072      3.607 -     3.622:   85.9833%  (      124)
00:13:29.072      3.622 -     3.636:   87.3654%  (      204)
00:13:29.072      3.636 -     3.651:   89.2690%  (      281)
00:13:29.072      3.651 -     3.665:   90.4207%  (      170)
00:13:29.072      3.665 -     3.680:   91.4233%  (      148)
00:13:29.072      3.680 -     3.695:   92.2024%  (      115)
00:13:29.072      3.695 -     3.709:   92.8121%  (       90)
00:13:29.072      3.709 -     3.724:   93.4219%  (       90)
00:13:29.072      3.724 -     3.753:   94.2822%  (      127)
00:13:29.072      3.753 -     3.782:   95.3255%  (      154)
00:13:29.072      3.782 -     3.811:   96.1181%  (      117)
00:13:29.072      3.811 -     3.840:   96.8159%  (      103)
00:13:29.072      3.840 -     3.869:   97.1140%  (       44)
00:13:29.072      3.869 -     3.898:   97.3173%  (       30)
00:13:29.072      3.898 -     3.927:   97.4798%  (       24)
00:13:29.072      3.927 -     3.956:   97.6018%  (       18)
00:13:29.072      3.956 -     3.985:   97.7102%  (       16)
00:13:29.072      3.985 -     4.015:   97.8389%  (       19)
00:13:29.072      4.015 -     4.044:   97.8931%  (        8)
00:13:29.072      4.044 -     4.073:   97.9337%  (        6)
00:13:29.072      4.073 -     4.102:   98.0150%  (       12)
00:13:29.072      4.102 -     4.131:   98.1099%  (       14)
00:13:29.072      4.131 -     4.160:   98.2183%  (       16)
00:13:29.072      4.160 -     4.189:   98.2928%  (       11)
00:13:29.072      4.189 -     4.218:   98.3673%  (       11)
00:13:29.072      4.218 -     4.247:   98.4147%  (        7)
00:13:29.072      4.247 -     4.276:   98.4893%  (       11)
00:13:29.072      4.276 -     4.305:   98.5367%  (        7)
00:13:29.072      4.305 -     4.335:   98.5773%  (        6)
00:13:29.072      4.335 -     4.364:   98.6248%  (        7)
00:13:29.072      4.364 -     4.393:   98.6654%  (        6)
00:13:29.072      4.393 -     4.422:   98.6857%  (        3)
00:13:29.072      4.422 -     4.451:   98.7060%  (        3)
00:13:29.072      4.451 -     4.480:   98.7535%  (        7)
00:13:29.072      4.480 -     4.509:   98.7738%  (        3)
00:13:29.072      4.509 -     4.538:   98.7806%  (        1)
00:13:29.072      4.538 -     4.567:   98.8009%  (        3)
00:13:29.072      4.567 -     4.596:   98.8348%  (        5)
00:13:29.072      4.625 -     4.655:   98.8483%  (        2)
00:13:29.072      4.655 -     4.684:   98.8551%  (        1)
00:13:29.072      4.713 -     4.742:   98.8754%  (        3)
00:13:29.072      4.742 -     4.771:   98.8822%  (        1)
00:13:29.072      4.771 -     4.800:   98.8957%  (        2)
00:13:29.072      4.858 -     4.887:   98.9093%  (        2)
00:13:29.072      4.887 -     4.916:   98.9228%  (        2)
00:13:29.072      5.033 -     5.062:   98.9296%  (        1)
00:13:29.072      5.091 -     5.120:   98.9364%  (        1)
00:13:29.072      6.255 -     6.284:   98.9432%  (        1)
00:13:29.072      6.284 -     6.313:   98.9499%  (        1)
00:13:29.072      7.796 -     7.855:   98.9567%  (        1)
00:13:29.072      7.913 -     7.971:   98.9635%  (        1)
00:13:29.072      7.971 -     8.029:   98.9703%  (        1)
00:13:29.072      8.029 -     8.087:   98.9906%  (        3)
00:13:29.072      8.087 -     8.145:   98.9974%  (        1)
00:13:29.072      8.145 -     8.204:   99.0109%  (        2)
00:13:29.072      8.262 -     8.320:   99.0312%  (        3)
00:13:29.072      8.320 -     8.378:   99.0380%  (        1)
00:13:29.072      8.378 -     8.436:   99.0583%  (        3)
00:13:29.072      8.436 -     8.495:   99.0651%  (        1)
00:13:29.072      8.844 -     8.902:   99.0719%  (        1)
00:13:29.072      9.018 -     9.076:   99.0854%  (        2)
00:13:29.072      9.193 -     9.251:   99.1058%  (        3)
00:13:29.072      9.367 -     9.425:   99.1125%  (        1)
00:13:29.072      9.484 -     9.542:   99.1261%  (        2)
00:13:29.072      9.600 -     9.658:   99.1329%  (        1)
00:13:29.072      9.716 -     9.775:   99.1396%  (        1)
00:13:29.072      9.949 -    10.007:   99.1464%  (        1)
00:13:29.072     10.007 -    10.065:   99.1532%  (        1)
00:13:29.072     10.124 -    10.182:   99.1667%  (        2)
00:13:29.072     10.705 -    10.764:   99.1735%  (        1)
00:13:29.072     11.113 -    11.171:   99.1803%  (        1)
00:13:29.072     11.287 -    11.345:   99.1870%  (        1)
00:13:29.072     12.102 -    12.160:   99.1938%  (        1)
00:13:29.072     13.207 -    13.265:   99.2006%  (        1)
00:13:29.072     13.731 -    13.789:   99.2074%  (        1)
00:13:29.072     14.022 -    14.080:   99.2141%  (        1)
00:13:29.072     14.080 -    14.138:   99.2209%  (        1)
00:13:29.072     14.313 -    14.371:   99.2277%  (        1)
00:13:29.072     14.371 -    14.429:   99.2345%  (        1)
00:13:29.072     14.604 -    14.662:   99.2412%  (        1)
00:13:29.072     15.244 -    15.360:   99.2480%  (        1)
00:13:29.072     17.338 -    17.455:   99.2548%  (        1)
00:13:29.072     17.687 -    17.804:   99.2819%  (        4)
00:13:29.072     17.804 -    17.920:   99.3496%  (       10)
00:13:29.072     17.920 -    18.036:   99.4377%  (       13)
00:13:29.072     18.036 -    18.153:   99.4784%  (        6)
00:13:29.072     18.153 -    18.269:   99.5055%  (        4)
00:13:29.072     18.269 -    18.385:   99.5393%  (        5)
00:13:29.072     18.385 -    18.502:   99.5529%  (        2)
00:13:29.072     18.618 -    18.735:   99.5597%  (        1)
00:13:29.072     18.735 -    18.851:   99.5935%  (        5)
00:13:29.072     18.851 -    18.967:   99.6138%  (        3)
00:13:29.072     18.967 -    19.084:   99.6274%  (        2)
00:13:29.072     19.084 -    19.200:   99.6680%  (        6)
00:13:29.072     19.200 -    19.316:   99.6816%  (        2)
00:13:29.072     19.316 -    19.433:   99.6951%  (        2)
00:13:29.072     19.433 -    19.549:   99.7561%  (        9)
00:13:29.072     19.549 -    19.665:   99.7764%  (        3)
00:13:29.072     19.665 -    19.782:   99.8239%  (        7)
00:13:29.072     19.782 -    19.898:   99.8442%  (        3)
00:13:29.072     19.898 -    20.015:   99.8510%  (        1)
00:13:29.072     20.015 -    20.131:   99.8577%  (        1)
00:13:29.072     24.087 -    24.204:   99.8645%  (        1)
00:13:29.072     24.902 -    25.018:   99.8713%  (        1)
00:13:29.072     30.487 -    30.720:   99.8781%  (        1)
00:13:29.072     30.953 -    31.185:   99.8848%  (        1)
00:13:29.072   3961.949 -  3991.738:   99.8916%  (        1)
00:13:29.072   3991.738 -  4021.527:   99.9390%  (        7)
00:13:29.072   4021.527 -  4051.316:   99.9865%  (        7)
00:13:29.072   4051.316 -  4081.105:   99.9932%  (        1)
00:13:29.072   4081.105 -  4110.895:  100.0000%  (        1)
00:13:29.072  
00:13:29.072  Complete histogram
00:13:29.072  ==================
00:13:29.072         Range in us     Cumulative     Count
00:13:29.072      1.876 -     1.891:    5.4807%  (      809)
00:13:29.072      1.891 -     1.905:   30.9464%  (     3759)
00:13:29.072      1.905 -     1.920:   59.5691%  (     4225)
00:13:29.072      1.920 -     1.935:   64.9414%  (      793)
00:13:29.072      1.935 -     1.949:   66.5470%  (      237)
00:13:29.072      1.949 -     1.964:   75.1711%  (     1273)
00:13:29.072      1.964 -     1.978:   81.6882%  (      962)
00:13:29.072      1.978 -     1.993:   83.4090%  (      254)
00:13:29.072      1.993 -     2.007:   84.1271%  (      106)
00:13:29.072      2.007 -     2.022:   86.8166%  (      397)
00:13:29.072      2.022 -     2.036:   89.7365%  (      431)
00:13:29.072      2.036 -     2.051:   91.0778%  (      198)
00:13:29.072      2.051 -     2.065:   91.3285%  (       37)
00:13:29.072      2.065 -     2.080:   91.7079%  (       56)
00:13:29.072      2.080 -     2.095:   93.0493%  (      198)
00:13:29.072      2.095 -     2.109:   93.9909%  (      139)
00:13:29.072      2.109 -     2.124:   94.2890%  (       44)
00:13:29.072      2.124 -     2.138:   94.3432%  (        8)
00:13:29.072      2.138 -     2.153:   94.7158%  (       55)
00:13:29.072      2.153 -     2.167:   95.7388%  (      151)
00:13:29.072      2.167 -     2.182:   96.1385%  (       59)
00:13:29.072      2.182 -     2.196:   96.1994%  (        9)
00:13:29.072      2.196 -     2.211:   96.2807%  (       12)
00:13:29.072      2.211 -     2.225:   96.5314%  (       37)
00:13:29.072      2.225 -     2.240:   97.5205%  (      146)
00:13:29.072      2.240 -     2.255:   98.3267%  (      119)
00:13:29.072      2.255 -     2.269:   98.4147%  (       13)
00:13:29.072      2.269 -     2.284:   98.4622%  (        7)
00:13:29.072      2.284 -     2.298:   98.4893%  (        4)
00:13:29.072      2.298 -     2.313:   98.5435%  (        8)
00:13:29.072      2.313 -     2.327:   98.5773%  (        5)
00:13:29.072      2.327 -     2.342:   98.6180%  (        6)
00:13:29.072      2.342 -     2.356:   98.6519%  (        5)
00:13:29.072      2.356 -     2.371:   98.6722%  (        3)
00:13:29.072      2.371 -     2.385:   98.7060%  (        5)
00:13:29.072      2.385 -     2.400:   98.7128%  (        1)
00:13:29.072      3.084 -     3.098:   98.7196%  (        1)
00:13:29.072      3.520 -     3.535:   98.7264%  (        1)
00:13:29.072      3.607 -     3.622:   98.7399%  (        2)
00:13:29.072      3.636 -     3.651:   98.7670%  (        4)
00:13:29.072      3.651 -     3.665:   98.7738%  (        1)
00:13:29.072      3.709 -     3.724:   98.7806%  (        1)
00:13:29.072      3.724 -     3.753:   98.7873%  (        1)
00:13:29.072      3.753 -     3.782:   98.8144%  (        4)
00:13:29.072      3.811 -     3.840:   98.8212%  (        1)
00:13:29.072      3.869 -     3.898:   98.8280%  (        1)
00:13:29.072      3.898 -     3.927:   98.8348%  (        1)
00:13:29.072      4.015 -     4.044:   98.8551%  (        3)
00:13:29.072      4.102 -     4.131:   98.8686%  (        2)
00:13:29.072      4.131 -     4.160:   98.8754%  (        1)
00:13:29.072      4.160 -     4.189:   98.8822%  (        1)
00:13:29.072      4.305 -     4.335:   98.8957%  (        2)
00:13:29.072      4.422 -     4.451:   98.9025%  (        1)
00:13:29.072      4.480 -     4.509:   98.9093%  (        1)
00:13:29.072      4.509 -     4.538:   98.9161%  (        1)
00:13:29.072      4.625 -     4.655:   98.9228%  (        1)
00:13:29.072      4.655 -     4.684:   98.9296%  (        1)
00:13:29.072      4.771 -     4.800:   98.9364%  (        1)
00:13:29.072      4.829 -     4.858:   98.9432%  (        1)
00:13:29.072      6.400 -     6.429:   98.9499%  (        1)
00:13:29.072      6.516 -     6.545:   98.9567%  (        1)
00:13:29.072      6.633 -     6.662:   98.9703%  (        2)
00:13:29.072      6.720 -     6.749:   98.9770%  (        1)
00:13:29.072      6.749 -     6.778:   98.9838%  (        1)
00:13:29.072      6.778 -     6.807:   98.9906%  (        1)
00:13:29.072      7.011 -     7.040:   98.9974%  (        1)
00:13:29.072      7.040 -     7.069:   99.0041%  (        1)
00:13:29.072      7.273 -     7.302:   99.0109%  (        1)
00:13:29.072      7.331 -     7.360:   99.0177%  (        1)
00:13:29.072      7.505 -     7.564:   99.0245%  (        1)
00:13:29.072      7.680 -     7.738:   99.0380%  (        2)
00:13:29.072      7.738 -     7.796:   99.0516%  (        2)
00:13:29.072      7.855 -     7.913:   99.0583%  (        1)
00:13:29.072      7.971 -     8.029:   99.0651%  (        1)
00:13:29.072      8.436 -     8.495:   99.0719%  (        1)
00:13:29.072      8.611 -     8.669:   99.0787%  (        1)
00:13:29.073     11.462 -    11.520:   99.0854%  (        1)
00:13:29.073     11.869 -    11.927:   99.0922%  (        1)
00:13:29.073     12.509 -    12.567:   99.0990%  (        1)
00:13:29.073     16.291 -    16.407:   99.1329%  (        5)
00:13:29.073     16.407 -    16.524:   99.1870%  (        8)
00:13:29.073     16.524 -    16.640:   99.2074%  (        3)
00:13:29.073     16.640 -    16.756:   99.2209%  (        2)
00:13:29.073     16.756 -    16.873:   99.2616%  (        6)
00:13:29.073     16.873 -    16.989:   99.2751%  (        2)
00:13:29.073     16.989 -    17.105:   99.2887%  (        2)
00:13:29.073     17.105 -    17.222:   99.2954%  (        1)
00:13:29.073     17.222 -    17.338:   99.3022%  (        1)
00:13:29.073     17.338 -    17.455:   99.3158%  (        2)
00:13:29.073     17.571 -    17.687:   99.3293%  (        2)
00:13:29.073     17.687 -    17.804:   99.3361%  (        1)
00:13:29.073     17.804 -    17.920:   99.3632%  (        4)
00:13:29.073     17.920 -    18.036:   99.3835%  (        3)
00:13:29.073     18.036 -    18.153:   99.4106%  (        4)
00:13:29.073     18.153 -    18.269:   99.4513%  (        6)
00:13:29.073     18.269 -    18.385:   99.4716%  (        3)
00:13:29.073     18.502 -    18.618:   99.4784%  (        1)
00:13:29.073     19.898 -    20.015:   99.4851%  (        1)
00:13:29.073     27.927 -    28.044:   99.4919%  (        1)
00:13:29.073     30.487 -    30.720:   99.4987%  (        1)
00:13:29.073   3038.487 -  3053.382:   99.5055%  (        1)
00:13:29.073   3053.382 -  3068.276:   99.5122%  (        1)
00:13:29.073   3351.273 -  3366.167:   99.5190%  (        1)
00:13:29.073   3902.371 -  3932.160:   99.5258%  (        1)
00:13:29.073   3932.160 -  3961.949:   99.5326%  (        1)
00:13:29.073   3961.949 -  3991.738:   99.5732%  (        6)
00:13:29.073   3991.738 -  4021.527:   99.7561%  (       27)
00:13:29.073   4021.527 -  4051.316:   99.9255%  (       25)
00:13:29.073   4051.316 -  4081.105:   99.9797%  (        8)
00:13:29.073   4081.105 -  4110.895:   99.9932%  (        2)
00:13:29.073   4438.575 -  4468.364:  100.0000%  (        1)
00:13:29.073  
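Note: the @89 overhead tool reports per-I/O submit and complete overhead (the avg/min/max summary lines are in nanoseconds) and then prints the two latency histograms above, where "Range in us" is the bucket boundary, "Cumulative" the running percentage, and "Count" the samples per bucket; the tail buckets around 4,000 us line up with the roughly 4 ms maxima in the summary (4,086,373.6 ns submit, 4,459,733.6 ns complete). Restating the recorded command with editorial flag notes:

  /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # -o 4096 : 4 KiB I/O; -t 1 : 1 second run; -H is present in the recorded command and this run printed histograms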
00:13:29.073   06:23:45	-- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:13:29.073   06:23:45	-- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:13:29.073   06:23:45	-- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:13:29.073   06:23:45	-- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:13:29.073   06:23:45	-- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
00:13:29.073  [2024-12-16 06:23:46.027036] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:13:29.073  [
00:13:29.073    {
00:13:29.073      "allow_any_host": true,
00:13:29.073      "hosts": [],
00:13:29.073      "listen_addresses": [],
00:13:29.073      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:29.073      "subtype": "Discovery"
00:13:29.073    },
00:13:29.073    {
00:13:29.073      "allow_any_host": true,
00:13:29.073      "hosts": [],
00:13:29.073      "listen_addresses": [
00:13:29.073        {
00:13:29.073          "adrfam": "IPv4",
00:13:29.073          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:13:29.073          "transport": "VFIOUSER",
00:13:29.073          "trsvcid": "0",
00:13:29.073          "trtype": "VFIOUSER"
00:13:29.073        }
00:13:29.073      ],
00:13:29.073      "max_cntlid": 65519,
00:13:29.073      "max_namespaces": 32,
00:13:29.073      "min_cntlid": 1,
00:13:29.073      "model_number": "SPDK bdev Controller",
00:13:29.073      "namespaces": [
00:13:29.073        {
00:13:29.073          "bdev_name": "Malloc1",
00:13:29.073          "name": "Malloc1",
00:13:29.073          "nguid": "8FF1D1354DCE473EBA170C04A13C8B91",
00:13:29.073          "nsid": 1,
00:13:29.073          "uuid": "8ff1d135-4dce-473e-ba17-0c04a13c8b91"
00:13:29.073        }
00:13:29.073      ],
00:13:29.073      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:13:29.073      "serial_number": "SPDK1",
00:13:29.073      "subtype": "NVMe"
00:13:29.073    },
00:13:29.073    {
00:13:29.073      "allow_any_host": true,
00:13:29.073      "hosts": [],
00:13:29.073      "listen_addresses": [
00:13:29.073        {
00:13:29.073          "adrfam": "IPv4",
00:13:29.073          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:13:29.073          "transport": "VFIOUSER",
00:13:29.073          "trsvcid": "0",
00:13:29.073          "trtype": "VFIOUSER"
00:13:29.073        }
00:13:29.073      ],
00:13:29.073      "max_cntlid": 65519,
00:13:29.073      "max_namespaces": 32,
00:13:29.073      "min_cntlid": 1,
00:13:29.073      "model_number": "SPDK bdev Controller",
00:13:29.073      "namespaces": [
00:13:29.073        {
00:13:29.073          "bdev_name": "Malloc2",
00:13:29.073          "name": "Malloc2",
00:13:29.073          "nguid": "E46778BD35044C4A99BC113EB5639C6C",
00:13:29.073          "nsid": 1,
00:13:29.073          "uuid": "e46778bd-3504-4c4a-99bc-113eb5639c6c"
00:13:29.073        }
00:13:29.073      ],
00:13:29.073      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:13:29.073      "serial_number": "SPDK2",
00:13:29.073      "subtype": "NVMe"
00:13:29.073    }
00:13:29.073  ]
00:13:29.330   06:23:46	-- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:13:29.330   06:23:46	-- target/nvmf_vfio_user.sh@34 -- # aerpid=71048
00:13:29.330   06:23:46	-- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r '		trtype:VFIOUSER 		traddr:/var/run/vfio-user/domain/vfio-user1/1 		subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:13:29.330   06:23:46	-- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:13:29.330   06:23:46	-- common/autotest_common.sh@1254 -- # local i=0
00:13:29.331   06:23:46	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:29.331   06:23:46	-- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']'
00:13:29.331   06:23:46	-- common/autotest_common.sh@1257 -- # i=1
00:13:29.331   06:23:46	-- common/autotest_common.sh@1258 -- # sleep 0.1
00:13:29.331   06:23:46	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:29.331   06:23:46	-- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']'
00:13:29.331   06:23:46	-- common/autotest_common.sh@1257 -- # i=2
00:13:29.331   06:23:46	-- common/autotest_common.sh@1258 -- # sleep 0.1
00:13:29.331   06:23:46	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:29.331   06:23:46	-- common/autotest_common.sh@1256 -- # '[' 2 -lt 200 ']'
00:13:29.331   06:23:46	-- common/autotest_common.sh@1257 -- # i=3
00:13:29.331   06:23:46	-- common/autotest_common.sh@1258 -- # sleep 0.1
00:13:29.589   06:23:46	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:29.589   06:23:46	-- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:29.589   06:23:46	-- common/autotest_common.sh@1265 -- # return 0
00:13:29.589   06:23:46	-- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
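Note: the waitforfile polling traced above (@37 plus the autotest_common.sh lines) simply spins until the aer test app creates /tmp/aer_touch_file. A minimal standalone sketch of the same loop, with the path and the 0.1 s / 200-iteration cadence taken from the trace (only the successful path is visible in this run, so the give-up branch is an assumption):

  waitforfile() {                          # sketch of the traced helper
      local i=0
      while [ ! -e "$1" ]; do
          [ "$i" -lt 200 ] || return 1     # assumed give-up path after ~20 s; not exercised here
          i=$((i + 1))
          sleep 0.1
      done
      return 0
  }
  waitforfile /tmp/aer_touch_file && rm -f /tmp/aer_touch_file   # @38 removes the touch file afterwards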
00:13:29.589   06:23:46	-- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:13:29.846  Malloc3
00:13:29.846   06:23:46	-- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:13:30.103   06:23:46	-- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
00:13:30.103  Asynchronous Event Request test
00:13:30.103  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:13:30.103  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:13:30.103  Registering asynchronous event callbacks...
00:13:30.103  Starting namespace attribute notice tests for all controllers...
00:13:30.103  /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:13:30.103  aer_cb - Changed Namespace
00:13:30.103  Cleaning up...
00:13:30.361  [
00:13:30.361    {
00:13:30.361      "allow_any_host": true,
00:13:30.361      "hosts": [],
00:13:30.361      "listen_addresses": [],
00:13:30.361      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:30.361      "subtype": "Discovery"
00:13:30.361    },
00:13:30.361    {
00:13:30.361      "allow_any_host": true,
00:13:30.361      "hosts": [],
00:13:30.361      "listen_addresses": [
00:13:30.361        {
00:13:30.361          "adrfam": "IPv4",
00:13:30.361          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:13:30.361          "transport": "VFIOUSER",
00:13:30.361          "trsvcid": "0",
00:13:30.361          "trtype": "VFIOUSER"
00:13:30.361        }
00:13:30.361      ],
00:13:30.361      "max_cntlid": 65519,
00:13:30.361      "max_namespaces": 32,
00:13:30.361      "min_cntlid": 1,
00:13:30.361      "model_number": "SPDK bdev Controller",
00:13:30.361      "namespaces": [
00:13:30.361        {
00:13:30.361          "bdev_name": "Malloc1",
00:13:30.361          "name": "Malloc1",
00:13:30.361          "nguid": "8FF1D1354DCE473EBA170C04A13C8B91",
00:13:30.361          "nsid": 1,
00:13:30.361          "uuid": "8ff1d135-4dce-473e-ba17-0c04a13c8b91"
00:13:30.361        },
00:13:30.361        {
00:13:30.361          "bdev_name": "Malloc3",
00:13:30.361          "name": "Malloc3",
00:13:30.361          "nguid": "536AF21BB4E94D2A81AC9964ACE681D1",
00:13:30.361          "nsid": 2,
00:13:30.361          "uuid": "536af21b-b4e9-4d2a-81ac-9964ace681d1"
00:13:30.361        }
00:13:30.361      ],
00:13:30.361      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:13:30.361      "serial_number": "SPDK1",
00:13:30.361      "subtype": "NVMe"
00:13:30.361    },
00:13:30.361    {
00:13:30.361      "allow_any_host": true,
00:13:30.361      "hosts": [],
00:13:30.361      "listen_addresses": [
00:13:30.361        {
00:13:30.361          "adrfam": "IPv4",
00:13:30.361          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:13:30.361          "transport": "VFIOUSER",
00:13:30.361          "trsvcid": "0",
00:13:30.361          "trtype": "VFIOUSER"
00:13:30.361        }
00:13:30.361      ],
00:13:30.361      "max_cntlid": 65519,
00:13:30.361      "max_namespaces": 32,
00:13:30.361      "min_cntlid": 1,
00:13:30.361      "model_number": "SPDK bdev Controller",
00:13:30.361      "namespaces": [
00:13:30.361        {
00:13:30.361          "bdev_name": "Malloc2",
00:13:30.361          "name": "Malloc2",
00:13:30.361          "nguid": "E46778BD35044C4A99BC113EB5639C6C",
00:13:30.361          "nsid": 1,
00:13:30.361          "uuid": "e46778bd-3504-4c4a-99bc-113eb5639c6c"
00:13:30.361        }
00:13:30.361      ],
00:13:30.361      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:13:30.361      "serial_number": "SPDK2",
00:13:30.361      "subtype": "NVMe"
00:13:30.361    }
00:13:30.361  ]
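Note: comparing the two nvmf_get_subsystems listings shows the effect of the @40/@41 steps: creating Malloc3 and attaching it to cnode1 as namespace 2 is what triggered the namespace-attribute AER ("aer_cb - Changed Namespace") and the extra namespaces[] entry above. The RPC sequence, copied from the trace (the inline size notes are editorial):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 --name Malloc3                       # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2  # attach as nsid 2
  $RPC nvmf_get_subsystems                                            # listing now includes Malloc3 (nsid 2)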
00:13:30.361   06:23:47	-- target/nvmf_vfio_user.sh@44 -- # wait 71048
00:13:30.361   06:23:47	-- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:30.361   06:23:47	-- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:13:30.361   06:23:47	-- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:13:30.361   06:23:47	-- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:13:30.361  [2024-12-16 06:23:47.209673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:30.361  [2024-12-16 06:23:47.209713] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71086 ]
00:13:30.621  [2024-12-16 06:23:47.343625] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:13:30.622  [2024-12-16 06:23:47.348810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:30.622  [2024-12-16 06:23:47.348858] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3063eed000
00:13:30.622  [2024-12-16 06:23:47.349803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:30.622  [2024-12-16 06:23:47.350811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:30.622  [2024-12-16 06:23:47.351810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:30.622  [2024-12-16 06:23:47.352809] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:30.622  [2024-12-16 06:23:47.353815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:30.622  [2024-12-16 06:23:47.354829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:30.622  [2024-12-16 06:23:47.358499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:30.622  [2024-12-16 06:23:47.358840] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:30.622  [2024-12-16 06:23:47.359845] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:30.622  [2024-12-16 06:23:47.359887] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f30634ba000
00:13:30.622  [2024-12-16 06:23:47.361091] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:30.622  [2024-12-16 06:23:47.377877] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:13:30.622  [2024-12-16 06:23:47.377931] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout)
00:13:30.622  [2024-12-16 06:23:47.380011] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:13:30.622  [2024-12-16 06:23:47.380085] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:13:30.622  [2024-12-16 06:23:47.380162] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout)
00:13:30.622  [2024-12-16 06:23:47.380187] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout)
00:13:30.622  [2024-12-16 06:23:47.380193] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout)
00:13:30.622  [2024-12-16 06:23:47.381014] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:13:30.622  [2024-12-16 06:23:47.381055] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout)
00:13:30.622  [2024-12-16 06:23:47.381066] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout)
00:13:30.622  [2024-12-16 06:23:47.382027] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:13:30.622  [2024-12-16 06:23:47.382068] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout)
00:13:30.622  [2024-12-16 06:23:47.382080] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms)
00:13:30.622  [2024-12-16 06:23:47.383023] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:13:30.622  [2024-12-16 06:23:47.383063] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:13:30.622  [2024-12-16 06:23:47.384028] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:13:30.622  [2024-12-16 06:23:47.384050] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0
00:13:30.622  [2024-12-16 06:23:47.384074] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms)
00:13:30.622  [2024-12-16 06:23:47.384083] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:13:30.622  [2024-12-16 06:23:47.384189] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1
00:13:30.622  [2024-12-16 06:23:47.384194] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:13:30.622  [2024-12-16 06:23:47.384199] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:13:30.622  [2024-12-16 06:23:47.385031] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:13:30.622  [2024-12-16 06:23:47.386029] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:13:30.622  [2024-12-16 06:23:47.387032] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:13:30.622  [2024-12-16 06:23:47.388061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:13:30.622  [2024-12-16 06:23:47.389040] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:13:30.622  [2024-12-16 06:23:47.389080] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:13:30.622  [2024-12-16 06:23:47.389087] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.389107] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout)
00:13:30.622  [2024-12-16 06:23:47.389123] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.389137] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:30.622  [2024-12-16 06:23:47.389144] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:30.622  [2024-12-16 06:23:47.389157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:30.622  [2024-12-16 06:23:47.395609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:13:30.622  [2024-12-16 06:23:47.395650] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072
00:13:30.622  [2024-12-16 06:23:47.395657] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072
00:13:30.622  [2024-12-16 06:23:47.395661] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001
00:13:30.622  [2024-12-16 06:23:47.395666] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:13:30.622  [2024-12-16 06:23:47.395671] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1
00:13:30.622  [2024-12-16 06:23:47.395675] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1
00:13:30.622  [2024-12-16 06:23:47.395680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.395695] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.395707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:13:30.622  [2024-12-16 06:23:47.403587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:13:30.622  [2024-12-16 06:23:47.403635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:30.622  [2024-12-16 06:23:47.403645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:30.622  [2024-12-16 06:23:47.403654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:30.622  [2024-12-16 06:23:47.403662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:30.622  [2024-12-16 06:23:47.403668] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.403680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.403690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:13:30.622  [2024-12-16 06:23:47.411559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:13:30.622  [2024-12-16 06:23:47.411578] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms
00:13:30.622  [2024-12-16 06:23:47.411602] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.411623] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.411637] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.411648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:13:30.622  [2024-12-16 06:23:47.419543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:13:30.622  [2024-12-16 06:23:47.419628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.419641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms)
00:13:30.622  [2024-12-16 06:23:47.419650] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:13:30.622  [2024-12-16 06:23:47.419655] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:13:30.622  [2024-12-16 06:23:47.419662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:13:30.622  [2024-12-16 06:23:47.427580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:13:30.622  [2024-12-16 06:23:47.427628] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added
00:13:30.623  [2024-12-16 06:23:47.427641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.427651] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.427660] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:30.623  [2024-12-16 06:23:47.427665] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:30.623  [2024-12-16 06:23:47.427672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.435559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.435606] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.435619] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.435629] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:30.623  [2024-12-16 06:23:47.435634] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:30.623  [2024-12-16 06:23:47.435640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.443559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.443598] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.443609] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.443621] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.443628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.443633] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.443638] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID
00:13:30.623  [2024-12-16 06:23:47.443643] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms)
00:13:30.623  [2024-12-16 06:23:47.443648] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout)
00:13:30.623  [2024-12-16 06:23:47.443668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.451545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.451588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.459562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.459604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.467560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.467603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.475544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.475587] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:13:30.623  [2024-12-16 06:23:47.475594] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:13:30.623  [2024-12-16 06:23:47.475598] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:13:30.623  [2024-12-16 06:23:47.475601] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:13:30.623  [2024-12-16 06:23:47.475608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:13:30.623  [2024-12-16 06:23:47.475616] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:13:30.623  [2024-12-16 06:23:47.475620] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:13:30.623  [2024-12-16 06:23:47.475626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.475633] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:13:30.623  [2024-12-16 06:23:47.475637] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:30.623  [2024-12-16 06:23:47.475643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.475651] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:13:30.623  [2024-12-16 06:23:47.475655] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:13:30.623  [2024-12-16 06:23:47.475661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:13:30.623  [2024-12-16 06:23:47.483556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.483605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.483618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:13:30.623  [2024-12-16 06:23:47.483626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:13:30.623  =====================================================
00:13:30.623  NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:30.623  =====================================================
00:13:30.623  Controller Capabilities/Features
00:13:30.623  ================================
00:13:30.623  Vendor ID:                             4e58
00:13:30.623  Subsystem Vendor ID:                   4e58
00:13:30.623  Serial Number:                         SPDK2
00:13:30.623  Model Number:                          SPDK bdev Controller
00:13:30.623  Firmware Version:                      24.01.1
00:13:30.623  Recommended Arb Burst:                 6
00:13:30.623  IEEE OUI Identifier:                   8d 6b 50
00:13:30.623  Multi-path I/O
00:13:30.623    May have multiple subsystem ports:   Yes
00:13:30.623    May have multiple controllers:       Yes
00:13:30.623    Associated with SR-IOV VF:           No
00:13:30.623  Max Data Transfer Size:                131072
00:13:30.623  Max Number of Namespaces:              32
00:13:30.623  Max Number of I/O Queues:              127
00:13:30.623  NVMe Specification Version (VS):       1.3
00:13:30.623  NVMe Specification Version (Identify): 1.3
00:13:30.623  Maximum Queue Entries:                 256
00:13:30.623  Contiguous Queues Required:            Yes
00:13:30.623  Arbitration Mechanisms Supported
00:13:30.623    Weighted Round Robin:                Not Supported
00:13:30.623    Vendor Specific:                     Not Supported
00:13:30.623  Reset Timeout:                         15000 ms
00:13:30.623  Doorbell Stride:                       4 bytes
00:13:30.623  NVM Subsystem Reset:                   Not Supported
00:13:30.623  Command Sets Supported
00:13:30.623    NVM Command Set:                     Supported
00:13:30.623  Boot Partition:                        Not Supported
00:13:30.623  Memory Page Size Minimum:              4096 bytes
00:13:30.623  Memory Page Size Maximum:              4096 bytes
00:13:30.623  Persistent Memory Region:              Not Supported
00:13:30.623  Optional Asynchronous Events Supported
00:13:30.623    Namespace Attribute Notices:         Supported
00:13:30.623    Firmware Activation Notices:         Not Supported
00:13:30.623    ANA Change Notices:                  Not Supported
00:13:30.623    PLE Aggregate Log Change Notices:    Not Supported
00:13:30.623    LBA Status Info Alert Notices:       Not Supported
00:13:30.623    EGE Aggregate Log Change Notices:    Not Supported
00:13:30.623    Normal NVM Subsystem Shutdown event: Not Supported
00:13:30.623    Zone Descriptor Change Notices:      Not Supported
00:13:30.623    Discovery Log Change Notices:        Not Supported
00:13:30.623  Controller Attributes
00:13:30.623    128-bit Host Identifier:             Supported
00:13:30.623    Non-Operational Permissive Mode:     Not Supported
00:13:30.623    NVM Sets:                            Not Supported
00:13:30.623    Read Recovery Levels:                Not Supported
00:13:30.623    Endurance Groups:                    Not Supported
00:13:30.623    Predictable Latency Mode:            Not Supported
00:13:30.623    Traffic Based Keep Alive:            Not Supported
00:13:30.623    Namespace Granularity:               Not Supported
00:13:30.623    SQ Associations:                     Not Supported
00:13:30.623    UUID List:                           Not Supported
00:13:30.623    Multi-Domain Subsystem:              Not Supported
00:13:30.623    Fixed Capacity Management:           Not Supported
00:13:30.623    Variable Capacity Management:        Not Supported
00:13:30.623    Delete Endurance Group:              Not Supported
00:13:30.623    Delete NVM Set:                      Not Supported
00:13:30.623    Extended LBA Formats Supported:      Not Supported
00:13:30.623    Flexible Data Placement Supported:   Not Supported
00:13:30.623  
00:13:30.623  Controller Memory Buffer Support
00:13:30.623  ================================
00:13:30.623  Supported:                             No
00:13:30.623  
00:13:30.623  Persistent Memory Region Support
00:13:30.623  ================================
00:13:30.623  Supported:                             No
00:13:30.623  
00:13:30.623  Admin Command Set Attributes
00:13:30.623  ============================
00:13:30.623  Security Send/Receive:                 Not Supported
00:13:30.623  Format NVM:                            Not Supported
00:13:30.623  Firmware Activate/Download:            Not Supported
00:13:30.623  Namespace Management:                  Not Supported
00:13:30.623  Device Self-Test:                      Not Supported
00:13:30.623  Directives:                            Not Supported
00:13:30.623  NVMe-MI:                               Not Supported
00:13:30.623  Virtualization Management:             Not Supported
00:13:30.623  Doorbell Buffer Config:                Not Supported
00:13:30.623  Get LBA Status Capability:             Not Supported
00:13:30.623  Command & Feature Lockdown Capability: Not Supported
00:13:30.623  Abort Command Limit:                   4
00:13:30.623  Async Event Request Limit:             4
00:13:30.623  Number of Firmware Slots:              N/A
00:13:30.623  Firmware Slot 1 Read-Only:             N/A
00:13:30.623  Firmware Activation Without Reset:     N/A
00:13:30.624  Multiple Update Detection Support:     N/A
00:13:30.624  Firmware Update Granularity:           No Information Provided
00:13:30.624  Per-Namespace SMART Log:               No
00:13:30.624  Asymmetric Namespace Access Log Page:  Not Supported
00:13:30.624  Subsystem NQN:                         nqn.2019-07.io.spdk:cnode2
00:13:30.624  Command Effects Log Page:              Supported
00:13:30.624  Get Log Page Extended Data:            Supported
00:13:30.624  Telemetry Log Pages:                   Not Supported
00:13:30.624  Persistent Event Log Pages:            Not Supported
00:13:30.624  Supported Log Pages Log Page:          May Support
00:13:30.624  Commands Supported & Effects Log Page: Not Supported
00:13:30.624  Feature Identifiers & Effects Log Page: May Support
00:13:30.624  NVMe-MI Commands & Effects Log Page:   May Support
00:13:30.624  Data Area 4 for Telemetry Log:         Not Supported
00:13:30.624  Error Log Page Entries Supported:      128
00:13:30.624  Keep Alive:                            Supported
00:13:30.624  Keep Alive Granularity:                10000 ms
00:13:30.624  
00:13:30.624  NVM Command Set Attributes
00:13:30.624  ==========================
00:13:30.624  Submission Queue Entry Size
00:13:30.624    Max:                       64
00:13:30.624    Min:                       64
00:13:30.624  Completion Queue Entry Size
00:13:30.624    Max:                       16
00:13:30.624    Min:                       16
00:13:30.624  Number of Namespaces:        32
00:13:30.624  Compare Command:             Supported
00:13:30.624  Write Uncorrectable Command: Not Supported
00:13:30.624  Dataset Management Command:  Supported
00:13:30.624  Write Zeroes Command:        Supported
00:13:30.624  Set Features Save Field:     Not Supported
00:13:30.624  Reservations:                Not Supported
00:13:30.624  Timestamp:                   Not Supported
00:13:30.624  Copy:                        Supported
00:13:30.624  Volatile Write Cache:        Present
00:13:30.624  Atomic Write Unit (Normal):  1
00:13:30.624  Atomic Write Unit (PFail):   1
00:13:30.624  Atomic Compare & Write Unit: 1
00:13:30.624  Fused Compare & Write:       Supported
00:13:30.624  Scatter-Gather List
00:13:30.624    SGL Command Set:           Supported (Dword aligned)
00:13:30.624    SGL Keyed:                 Not Supported
00:13:30.624    SGL Bit Bucket Descriptor: Not Supported
00:13:30.624    SGL Metadata Pointer:      Not Supported
00:13:30.624    Oversized SGL:             Not Supported
00:13:30.624    SGL Metadata Address:      Not Supported
00:13:30.624    SGL Offset:                Not Supported
00:13:30.624    Transport SGL Data Block:  Not Supported
00:13:30.624  Replay Protected Memory Block:  Not Supported
00:13:30.624  
00:13:30.624  Firmware Slot Information
00:13:30.624  =========================
00:13:30.624  Active slot:                 1
00:13:30.624  Slot 1 Firmware Revision:    24.01.1
00:13:30.624  
00:13:30.624  
00:13:30.624  Commands Supported and Effects
00:13:30.624  ==============================
00:13:30.624  Admin Commands
00:13:30.624  --------------
00:13:30.624                    Get Log Page (02h): Supported 
00:13:30.624                        Identify (06h): Supported 
00:13:30.624                           Abort (08h): Supported 
00:13:30.624                    Set Features (09h): Supported 
00:13:30.624                    Get Features (0Ah): Supported 
00:13:30.624      Asynchronous Event Request (0Ch): Supported 
00:13:30.624                      Keep Alive (18h): Supported 
00:13:30.624  I/O Commands
00:13:30.624  ------------
00:13:30.624                           Flush (00h): Supported LBA-Change 
00:13:30.624                           Write (01h): Supported LBA-Change 
00:13:30.624                            Read (02h): Supported 
00:13:30.624                         Compare (05h): Supported 
00:13:30.624                    Write Zeroes (08h): Supported LBA-Change 
00:13:30.624              Dataset Management (09h): Supported LBA-Change 
00:13:30.624                            Copy (19h): Supported LBA-Change 
00:13:30.624                         Unknown (79h): Supported LBA-Change 
00:13:30.624                         Unknown (7Ah): Supported 
00:13:30.624  
00:13:30.624  Error Log
00:13:30.624  =========
00:13:30.624  
00:13:30.624  Arbitration
00:13:30.624  ===========
00:13:30.624  Arbitration Burst:           1
00:13:30.624  
00:13:30.624  Power Management
00:13:30.624  ================
00:13:30.624  Number of Power States:          1
00:13:30.624  Current Power State:             Power State #0
00:13:30.624  Power State #0:
00:13:30.624    Max Power:                      0.00 W
00:13:30.624    Non-Operational State:         Operational
00:13:30.624    Entry Latency:                 Not Reported
00:13:30.624    Exit Latency:                  Not Reported
00:13:30.624    Relative Read Throughput:      0
00:13:30.624    Relative Read Latency:         0
00:13:30.624    Relative Write Throughput:     0
00:13:30.624    Relative Write Latency:        0
00:13:30.624    Idle Power:                     Not Reported
00:13:30.624    Active Power:                   Not Reported
00:13:30.624  Non-Operational Permissive Mode: Not Supported
00:13:30.624  
00:13:30.624  Health Information
00:13:30.624  ==================
00:13:30.624  Critical Warnings:
00:13:30.624    Available Spare Space:     OK
00:13:30.624    Temperature:               OK
00:13:30.624    Device Reliability:        OK
00:13:30.624    Read Only:                 No
00:13:30.624    Volatile Memory Backup:    OK
00:13:30.624  Current Temperature:         0 Kelvin (-273 Celsius)
00:13:30.624  [2024-12-16 06:23:47.483732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:13:30.624  [2024-12-16 06:23:47.491583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:13:30.624  [2024-12-16 06:23:47.491646] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:13:30.624  [2024-12-16 06:23:47.491659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:30.624  [2024-12-16 06:23:47.491666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:30.624  [2024-12-16 06:23:47.491672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:30.624  [2024-12-16 06:23:47.491678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:30.624  [2024-12-16 06:23:47.491738] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:13:30.624  [2024-12-16 06:23:47.491755] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:13:30.624  [2024-12-16 06:23:47.492778] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:13:30.624  [2024-12-16 06:23:47.492798] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:13:30.624  [2024-12-16 06:23:47.493742] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:13:30.624  [2024-12-16 06:23:47.493786] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:13:30.624  [2024-12-16 06:23:47.493843] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:13:30.624  [2024-12-16 06:23:47.496501] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:30.624  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:13:30.624  Available Spare:             0%
00:13:30.624  Available Spare Threshold:   0%
00:13:30.624  Life Percentage Used:        0%
00:13:30.624  Data Units Read:             0
00:13:30.624  Data Units Written:          0
00:13:30.624  Host Read Commands:          0
00:13:30.624  Host Write Commands:         0
00:13:30.624  Controller Busy Time:        0 minutes
00:13:30.624  Power Cycles:                0
00:13:30.624  Power On Hours:              0 hours
00:13:30.624  Unsafe Shutdowns:            0
00:13:30.624  Unrecoverable Media Errors:  0
00:13:30.624  Lifetime Error Log Entries:  0
00:13:30.624  Warning Temperature Time:    0 minutes
00:13:30.624  Critical Temperature Time:   0 minutes
00:13:30.624  
00:13:30.624  Number of Queues
00:13:30.624  ================
00:13:30.624  Number of I/O Submission Queues:      127
00:13:30.624  Number of I/O Completion Queues:      127
00:13:30.624  
00:13:30.624  Active Namespaces
00:13:30.624  =================
00:13:30.624  Namespace ID:1
00:13:30.624  Error Recovery Timeout:                Unlimited
00:13:30.624  Command Set Identifier:                NVM (00h)
00:13:30.624  Deallocate:                            Supported
00:13:30.624  Deallocated/Unwritten Error:           Not Supported
00:13:30.624  Deallocated Read Value:                Unknown
00:13:30.624  Deallocate in Write Zeroes:            Not Supported
00:13:30.624  Deallocated Guard Field:               0xFFFF
00:13:30.624  Flush:                                 Supported
00:13:30.624  Reservation:                           Supported
00:13:30.624  Namespace Sharing Capabilities:        Multiple Controllers
00:13:30.624  Size (in LBAs):                        131072 (0GiB)
00:13:30.624  Capacity (in LBAs):                    131072 (0GiB)
00:13:30.624  Utilization (in LBAs):                 131072 (0GiB)
00:13:30.624  NGUID:                                 E46778BD35044C4A99BC113EB5639C6C
00:13:30.624  UUID:                                  e46778bd-3504-4c4a-99bc-113eb5639c6c
00:13:30.624  Thin Provisioning:                     Not Supported
00:13:30.624  Per-NS Atomic Units:                   Yes
00:13:30.624    Atomic Boundary Size (Normal):       0
00:13:30.624    Atomic Boundary Size (PFail):        0
00:13:30.624    Atomic Boundary Offset:              0
00:13:30.624  Maximum Single Source Range Length:    65535
00:13:30.624  Maximum Copy Length:                   65535
00:13:30.624  Maximum Source Range Count:            1
00:13:30.624  NGUID/EUI64 Never Reused:              No
00:13:30.624  Namespace Write Protected:             No
00:13:30.624  Number of LBA Formats:                 1
00:13:30.624  Current LBA Format:                    LBA Format #00
00:13:30.624  LBA Format #00: Data Size:   512  Metadata Size:     0
00:13:30.624  
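The controller and namespace report above is the sort of dump SPDK's identify example prints after attaching to a controller. A minimal, hedged sketch of reproducing it against this vfio-user target, assuming the example binary is built at build/examples/identify (the exact path and name vary between SPDK versions):

    # Hedged sketch: print the identify data for cnode2 over the VFIOUSER transport.
    /home/vagrant/spdk_repo/spdk/build/examples/identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'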
00:13:30.624   06:23:47	-- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:13:37.182  Initializing NVMe Controllers
00:13:37.182  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:37.182  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:13:37.182  Initialization complete. Launching workers.
00:13:37.182  ========================================================
00:13:37.182                                                                                                           Latency(us)
00:13:37.182  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:13:37.182  VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core  1:   38848.72     151.75    3294.45    1034.92   10647.91
00:13:37.182  ========================================================
00:13:37.182  Total                                                                :   38848.72     151.75    3294.45    1034.92   10647.91
00:13:37.182  
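For reference, a hedged reading of the spdk_nvme_perf invocation above; the -q, -o, -w, -t, -c and -r meanings follow the tool's help text, while the -s and -g interpretations are assumptions:

    #   -r          transport ID: VFIOUSER transport, socket directory, target subsystem NQN
    #   -q 128      queue depth per worker
    #   -o 4096     I/O size in bytes
    #   -w read     I/O pattern (read here; write and randrw are used in the later runs)
    #   -t 5        run time in seconds
    #   -c 0x2      core mask, i.e. run the I/O worker on core 1
    #   -s 256      hugepage memory to allocate in MB (assumed)
    #   -g          single-segment DPDK memory mode (assumed)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2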
00:13:37.182   06:23:52	-- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:13:41.362  Initializing NVMe Controllers
00:13:41.362  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:41.362  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:13:41.362  Initialization complete. Launching workers.
00:13:41.362  ========================================================
00:13:41.362                                                                                                           Latency(us)
00:13:41.362  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:13:41.362  VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core  1:   39273.93     153.41    3259.02    1025.44   11776.98
00:13:41.362  ========================================================
00:13:41.362  Total                                                                :   39273.93     153.41    3259.02    1025.44   11776.98
00:13:41.362  
00:13:41.362   06:23:58	-- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:13:47.920  Initializing NVMe Controllers
00:13:47.920  Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:47.920  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:47.920  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:13:47.920  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:13:47.920  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:13:47.920  Initialization complete. Launching workers.
00:13:47.920  Starting thread on core 2
00:13:47.920  Starting thread on core 3
00:13:47.920  Starting thread on core 1
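The reconnect example above spreads 32-deep randrw queues across cores 1-3 and, assuming -M carries the usual rwmixread meaning, uses a roughly 50% read mix; only worker startup is logged here, since the run exercises reconnect handling rather than reporting a throughput summary. The core mask works out as:

    0xE = 1110b -> cores 1, 2 and 3   (matching the three "Starting thread on core N" lines above)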
00:13:47.920   06:24:03	-- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:13:50.451  Initializing NVMe Controllers
00:13:50.451  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:50.451  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:50.451  Associating SPDK bdev Controller (SPDK2               ) with lcore 0
00:13:50.451  Associating SPDK bdev Controller (SPDK2               ) with lcore 1
00:13:50.451  Associating SPDK bdev Controller (SPDK2               ) with lcore 2
00:13:50.451  Associating SPDK bdev Controller (SPDK2               ) with lcore 3
00:13:50.451  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:13:50.451  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:13:50.451  Initialization complete. Launching workers.
00:13:50.451  Starting thread on core 1 with urgent priority queue
00:13:50.451  Starting thread on core 2 with urgent priority queue
00:13:50.451  Starting thread on core 3 with urgent priority queue
00:13:50.451  Starting thread on core 0 with urgent priority queue
00:13:50.451  SPDK bdev Controller (SPDK2               ) core 0:  7360.00 IO/s    13.59 secs/100000 ios
00:13:50.451  SPDK bdev Controller (SPDK2               ) core 1:  8710.00 IO/s    11.48 secs/100000 ios
00:13:50.451  SPDK bdev Controller (SPDK2               ) core 2:  7738.33 IO/s    12.92 secs/100000 ios
00:13:50.451  SPDK bdev Controller (SPDK2               ) core 3:  7531.00 IO/s    13.28 secs/100000 ios
00:13:50.451  ========================================================
00:13:50.451  
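As a quick consistency check on the arbitration summary above, the secs/100000 ios column is simply the inverse of each core's rate, e.g. for core 1:

    100000 ios / 8710.00 IO/s ≈ 11.48 s   (the printed 11.48 secs/100000 ios)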
00:13:50.451   06:24:07	-- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:13:50.451  Initializing NVMe Controllers
00:13:50.451  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:50.451  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:50.451    Namespace ID: 1 size: 0GB
00:13:50.451  Initialization complete.
00:13:50.451  INFO: using host memory buffer for IO
00:13:50.451  Hello world!
00:13:50.451   06:24:07	-- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:13:51.830  Initializing NVMe Controllers
00:13:51.830  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:51.830  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:51.830  Initialization complete. Launching workers.
00:13:51.830  submit (in ns)   avg, min, max =   8217.1,   3245.5, 4043962.7
00:13:51.830  complete (in ns) avg, min, max =  23733.4,   1889.1, 4493449.1
00:13:51.830  
00:13:51.830  Submit histogram
00:13:51.830  ================
00:13:51.830         Range in us     Cumulative     Count
00:13:51.830      3.244 -     3.258:    0.0265%  (        4)
00:13:51.830      3.258 -     3.273:    0.0661%  (        6)
00:13:51.830      3.273 -     3.287:    2.8906%  (      427)
00:13:51.830      3.287 -     3.302:   11.9659%  (     1372)
00:13:51.830      3.302 -     3.316:   22.3971%  (     1577)
00:13:51.830      3.316 -     3.331:   31.2872%  (     1344)
00:13:51.830      3.331 -     3.345:   36.1093%  (      729)
00:13:51.830      3.345 -     3.360:   41.0702%  (      750)
00:13:51.830      3.360 -     3.375:   47.8965%  (     1032)
00:13:51.830      3.375 -     3.389:   54.5641%  (     1008)
00:13:51.830      3.389 -     3.404:   60.2130%  (      854)
00:13:51.830      3.404 -     3.418:   63.5931%  (      511)
00:13:51.830      3.418 -     3.433:   66.0934%  (      378)
00:13:51.830      3.433 -     3.447:   67.8661%  (      268)
00:13:51.830      3.447 -     3.462:   70.2540%  (      361)
00:13:51.830      3.462 -     3.476:   72.4170%  (      327)
00:13:51.830      3.476 -     3.491:   75.1025%  (      406)
00:13:51.830      3.491 -     3.505:   76.7099%  (      243)
00:13:51.830      3.505 -     3.520:   77.7153%  (      152)
00:13:51.830      3.520 -     3.535:   78.6877%  (      147)
00:13:51.830      3.535 -     3.549:   79.5806%  (      135)
00:13:51.830      3.549 -     3.564:   80.7977%  (      184)
00:13:51.830      3.564 -     3.578:   81.7899%  (      150)
00:13:51.830      3.578 -     3.593:   82.6101%  (      124)
00:13:51.830      3.593 -     3.607:   83.4303%  (      124)
00:13:51.830      3.607 -     3.622:   84.0587%  (       95)
00:13:51.830      3.622 -     3.636:   84.9583%  (      136)
00:13:51.830      3.636 -     3.651:   86.7840%  (      276)
00:13:51.830      3.651 -     3.665:   88.0540%  (      192)
00:13:51.830      3.665 -     3.680:   89.1586%  (      167)
00:13:51.830      3.680 -     3.695:   90.3956%  (      187)
00:13:51.830      3.695 -     3.709:   91.1959%  (      121)
00:13:51.830      3.709 -     3.724:   92.1154%  (      139)
00:13:51.830      3.724 -     3.753:   93.3192%  (      182)
00:13:51.830      3.753 -     3.782:   94.4239%  (      167)
00:13:51.830      3.782 -     3.811:   95.4690%  (      158)
00:13:51.830      3.811 -     3.840:   96.3752%  (      137)
00:13:51.830      3.840 -     3.869:   96.8713%  (       75)
00:13:51.830      3.869 -     3.898:   97.1623%  (       44)
00:13:51.830      3.898 -     3.927:   97.4401%  (       42)
00:13:51.830      3.927 -     3.956:   97.5724%  (       20)
00:13:51.830      3.956 -     3.985:   97.6716%  (       15)
00:13:51.830      3.985 -     4.015:   97.8172%  (       22)
00:13:51.830      4.015 -     4.044:   97.9098%  (       14)
00:13:51.830      4.044 -     4.073:   97.9561%  (        7)
00:13:51.830      4.073 -     4.102:   97.9892%  (        5)
00:13:51.830      4.102 -     4.131:   98.0818%  (       14)
00:13:51.830      4.131 -     4.160:   98.1677%  (       13)
00:13:51.830      4.160 -     4.189:   98.2074%  (        6)
00:13:51.830      4.189 -     4.218:   98.2537%  (        7)
00:13:51.830      4.218 -     4.247:   98.2736%  (        3)
00:13:51.830      4.247 -     4.276:   98.3133%  (        6)
00:13:51.830      4.276 -     4.305:   98.3530%  (        6)
00:13:51.830      4.305 -     4.335:   98.4059%  (        8)
00:13:51.830      4.335 -     4.364:   98.4389%  (        5)
00:13:51.830      4.364 -     4.393:   98.4786%  (        6)
00:13:51.830      4.393 -     4.422:   98.5382%  (        9)
00:13:51.830      4.422 -     4.451:   98.5779%  (        6)
00:13:51.830      4.451 -     4.480:   98.6109%  (        5)
00:13:51.830      4.480 -     4.509:   98.6506%  (        6)
00:13:51.830      4.509 -     4.538:   98.6837%  (        5)
00:13:51.830      4.538 -     4.567:   98.7101%  (        4)
00:13:51.830      4.567 -     4.596:   98.7300%  (        3)
00:13:51.830      4.596 -     4.625:   98.7498%  (        3)
00:13:51.830      4.625 -     4.655:   98.7697%  (        3)
00:13:51.830      4.655 -     4.684:   98.7895%  (        3)
00:13:51.830      4.684 -     4.713:   98.8292%  (        6)
00:13:51.830      4.713 -     4.742:   98.8358%  (        1)
00:13:51.830      4.742 -     4.771:   98.8424%  (        1)
00:13:51.830      4.771 -     4.800:   98.8491%  (        1)
00:13:51.830      5.062 -     5.091:   98.8557%  (        1)
00:13:51.830      5.207 -     5.236:   98.8623%  (        1)
00:13:51.830      5.236 -     5.265:   98.8689%  (        1)
00:13:51.830      5.324 -     5.353:   98.8755%  (        1)
00:13:51.830      5.411 -     5.440:   98.8821%  (        1)
00:13:51.830      5.644 -     5.673:   98.8887%  (        1)
00:13:51.830      5.876 -     5.905:   98.8954%  (        1)
00:13:51.830      6.167 -     6.196:   98.9020%  (        1)
00:13:51.830      6.342 -     6.371:   98.9218%  (        3)
00:13:51.830      6.545 -     6.575:   98.9284%  (        1)
00:13:51.830      6.778 -     6.807:   98.9417%  (        2)
00:13:51.830      8.029 -     8.087:   98.9483%  (        1)
00:13:51.830      8.262 -     8.320:   98.9615%  (        2)
00:13:51.830      8.320 -     8.378:   98.9681%  (        1)
00:13:51.831      8.436 -     8.495:   98.9747%  (        1)
00:13:51.831      8.495 -     8.553:   98.9880%  (        2)
00:13:51.831      8.553 -     8.611:   99.0012%  (        2)
00:13:51.831      8.785 -     8.844:   99.0210%  (        3)
00:13:51.831      8.902 -     8.960:   99.0276%  (        1)
00:13:51.831      9.135 -     9.193:   99.0343%  (        1)
00:13:51.831      9.193 -     9.251:   99.0409%  (        1)
00:13:51.831      9.251 -     9.309:   99.0541%  (        2)
00:13:51.831      9.309 -     9.367:   99.0607%  (        1)
00:13:51.831      9.367 -     9.425:   99.0673%  (        1)
00:13:51.831      9.425 -     9.484:   99.0872%  (        3)
00:13:51.831      9.600 -     9.658:   99.1004%  (        2)
00:13:51.831      9.658 -     9.716:   99.1070%  (        1)
00:13:51.831      9.833 -     9.891:   99.1203%  (        2)
00:13:51.831      9.891 -     9.949:   99.1335%  (        2)
00:13:51.831     10.240 -    10.298:   99.1401%  (        1)
00:13:51.831     10.415 -    10.473:   99.1467%  (        1)
00:13:51.831     10.647 -    10.705:   99.1533%  (        1)
00:13:51.831     10.822 -    10.880:   99.1599%  (        1)
00:13:51.831     13.789 -    13.847:   99.1666%  (        1)
00:13:51.831     14.371 -    14.429:   99.1732%  (        1)
00:13:51.831     14.836 -    14.895:   99.1798%  (        1)
00:13:51.831     16.640 -    16.756:   99.1930%  (        2)
00:13:51.831     17.571 -    17.687:   99.1996%  (        1)
00:13:51.831     17.687 -    17.804:   99.2062%  (        1)
00:13:51.831     17.804 -    17.920:   99.2525%  (        7)
00:13:51.831     17.920 -    18.036:   99.3055%  (        8)
00:13:51.831     18.036 -    18.153:   99.3584%  (        8)
00:13:51.831     18.153 -    18.269:   99.3915%  (        5)
00:13:51.831     18.269 -    18.385:   99.4774%  (       13)
00:13:51.831     18.385 -    18.502:   99.5039%  (        4)
00:13:51.831     18.502 -    18.618:   99.5304%  (        4)
00:13:51.831     18.618 -    18.735:   99.5370%  (        1)
00:13:51.831     18.735 -    18.851:   99.5700%  (        5)
00:13:51.831     18.851 -    18.967:   99.5767%  (        1)
00:13:51.831     18.967 -    19.084:   99.5899%  (        2)
00:13:51.831     19.084 -    19.200:   99.6230%  (        5)
00:13:51.831     19.200 -    19.316:   99.6362%  (        2)
00:13:51.831     19.316 -    19.433:   99.6825%  (        7)
00:13:51.831     19.433 -    19.549:   99.7288%  (        7)
00:13:51.831     19.549 -    19.665:   99.7685%  (        6)
00:13:51.831     19.665 -    19.782:   99.8016%  (        5)
00:13:51.831     19.782 -    19.898:   99.8082%  (        1)
00:13:51.831     19.898 -    20.015:   99.8214%  (        2)
00:13:51.831     20.596 -    20.713:   99.8280%  (        1)
00:13:51.831     20.713 -    20.829:   99.8346%  (        1)
00:13:51.831     21.295 -    21.411:   99.8412%  (        1)
00:13:51.831     22.225 -    22.342:   99.8479%  (        1)
00:13:51.831     24.902 -    25.018:   99.8545%  (        1)
00:13:51.831     26.065 -    26.182:   99.8611%  (        1)
00:13:51.831     29.324 -    29.440:   99.8743%  (        2)
00:13:51.831     30.487 -    30.720:   99.8809%  (        1)
00:13:51.831   3083.171 -  3098.065:   99.8876%  (        1)
00:13:51.831   3127.855 -  3142.749:   99.8942%  (        1)
00:13:51.831   3961.949 -  3991.738:   99.9074%  (        2)
00:13:51.831   3991.738 -  4021.527:   99.9735%  (       10)
00:13:51.831   4021.527 -  4051.316:  100.0000%  (        4)
00:13:51.831  
00:13:51.831  Complete histogram
00:13:51.831  ==================
00:13:51.831         Range in us     Cumulative     Count
00:13:51.831      1.876 -     1.891:    0.0463%  (        7)
00:13:51.831      1.891 -     1.905:    8.8041%  (     1324)
00:13:51.831      1.905 -     1.920:   40.1508%  (     4739)
00:13:51.831      1.920 -     1.935:   55.7547%  (     2359)
00:13:51.831      1.935 -     1.949:   58.1558%  (      363)
00:13:51.831      1.949 -     1.964:   61.3176%  (      478)
00:13:51.831      1.964 -     1.978:   72.6750%  (     1717)
00:13:51.831      1.978 -     1.993:   79.1507%  (      979)
00:13:51.831      1.993 -     2.007:   79.8386%  (      104)
00:13:51.831      2.007 -     2.022:   81.0888%  (      189)
00:13:51.831      2.022 -     2.036:   84.4424%  (      507)
00:13:51.831      2.036 -     2.051:   88.7353%  (      649)
00:13:51.831      2.051 -     2.065:   89.2975%  (       85)
00:13:51.831      2.065 -     2.080:   89.4827%  (       28)
00:13:51.831      2.080 -     2.095:   90.5808%  (      166)
00:13:51.831      2.095 -     2.109:   93.1142%  (      383)
00:13:51.831      2.109 -     2.124:   93.9542%  (      127)
00:13:51.831      2.124 -     2.138:   94.0667%  (       17)
00:13:51.831      2.138 -     2.153:   94.1593%  (       14)
00:13:51.831      2.153 -     2.167:   94.8604%  (      106)
00:13:51.831      2.167 -     2.182:   95.8791%  (      154)
00:13:51.831      2.182 -     2.196:   96.1966%  (       48)
00:13:51.831      2.196 -     2.211:   96.2826%  (       13)
00:13:51.831      2.211 -     2.225:   96.3024%  (        3)
00:13:51.831      2.225 -     2.240:   96.6530%  (       53)
00:13:51.831      2.240 -     2.255:   97.8701%  (      184)
00:13:51.831      2.255 -     2.269:   98.2934%  (       64)
00:13:51.831      2.269 -     2.284:   98.3397%  (        7)
00:13:51.831      2.284 -     2.298:   98.3530%  (        2)
00:13:51.831      2.298 -     2.313:   98.3993%  (        7)
00:13:51.831      2.313 -     2.327:   98.4389%  (        6)
00:13:51.831      2.327 -     2.342:   98.4852%  (        7)
00:13:51.831      2.342 -     2.356:   98.5183%  (        5)
00:13:51.831      2.356 -     2.371:   98.5580%  (        6)
00:13:51.831      2.371 -     2.385:   98.5712%  (        2)
00:13:51.831      2.385 -     2.400:   98.5845%  (        2)
00:13:51.831      2.400 -     2.415:   98.6242%  (        6)
00:13:51.831      2.415 -     2.429:   98.6440%  (        3)
00:13:51.831      2.429 -     2.444:   98.6506%  (        1)
00:13:51.831      2.444 -     2.458:   98.6638%  (        2)
00:13:51.831      2.458 -     2.473:   98.6705%  (        1)
00:13:51.831      2.531 -     2.545:   98.6771%  (        1)
00:13:51.831      2.575 -     2.589:   98.6903%  (        2)
00:13:51.831      2.662 -     2.676:   98.6969%  (        1)
00:13:51.831      3.389 -     3.404:   98.7035%  (        1)
00:13:51.831      3.709 -     3.724:   98.7101%  (        1)
00:13:51.831      3.753 -     3.782:   98.7168%  (        1)
00:13:51.831      3.782 -     3.811:   98.7234%  (        1)
00:13:51.831      3.840 -     3.869:   98.7432%  (        3)
00:13:51.831      3.869 -     3.898:   98.7631%  (        3)
00:13:51.831      3.927 -     3.956:   98.7697%  (        1)
00:13:51.831      3.956 -     3.985:   98.7829%  (        2)
00:13:51.831      3.985 -     4.015:   98.7895%  (        1)
00:13:51.831      4.015 -     4.044:   98.7961%  (        1)
00:13:51.831      4.044 -     4.073:   98.8028%  (        1)
00:13:51.831      4.073 -     4.102:   98.8094%  (        1)
00:13:51.831      4.131 -     4.160:   98.8160%  (        1)
00:13:51.831      4.189 -     4.218:   98.8226%  (        1)
00:13:51.831      4.218 -     4.247:   98.8292%  (        1)
00:13:51.831      4.305 -     4.335:   98.8358%  (        1)
00:13:51.831      4.364 -     4.393:   98.8491%  (        2)
00:13:51.831      4.480 -     4.509:   98.8557%  (        1)
00:13:51.831      6.575 -     6.604:   98.8623%  (        1)
00:13:51.831      6.720 -     6.749:   98.8689%  (        1)
00:13:51.831      6.749 -     6.778:   98.8755%  (        1)
00:13:51.831      6.778 -     6.807:   98.8821%  (        1)
00:13:51.831      6.865 -     6.895:   98.8887%  (        1)
00:13:51.831      6.895 -     6.924:   98.8954%  (        1)
00:13:51.831      6.924 -     6.953:   98.9020%  (        1)
00:13:51.831      6.982 -     7.011:   98.9086%  (        1)
00:13:51.831      7.040 -     7.069:   98.9218%  (        2)
00:13:51.831      7.127 -     7.156:   98.9284%  (        1)
00:13:51.831      7.447 -     7.505:   98.9350%  (        1)
00:13:51.831      7.564 -     7.622:   98.9417%  (        1)
00:13:51.831      7.680 -     7.738:   98.9483%  (        1)
00:13:51.831      8.087 -     8.145:   98.9615%  (        2)
00:13:51.831      8.378 -     8.436:   98.9681%  (        1)
00:13:51.831      8.727 -     8.785:   98.9747%  (        1)
00:13:51.831      8.960 -     9.018:   98.9813%  (        1)
00:13:51.831      9.135 -     9.193:   98.9880%  (        1)
00:13:51.831      9.484 -     9.542:   98.9946%  (        1)
00:13:51.831     10.007 -    10.065:   99.0012%  (        1)
00:13:51.831     11.113 -    11.171:   99.0078%  (        1)
00:13:51.831     12.858 -    12.916:   99.0144%  (        1)
00:13:51.831     13.149 -    13.207:   99.0210%  (        1)
00:13:51.831     13.324 -    13.382:   99.0276%  (        1)
00:13:51.831     13.498 -    13.556:   99.0343%  (        1)
00:13:51.831     16.291 -    16.407:   99.0806%  (        7)
00:13:51.831     16.407 -    16.524:   99.0938%  (        2)
00:13:51.831     16.524 -    16.640:   99.1136%  (        3)
00:13:51.831     16.640 -    16.756:   99.1467%  (        5)
00:13:51.831     16.756 -    16.873:   99.1599%  (        2)
00:13:51.831     16.873 -    16.989:   99.2129%  (        8)
00:13:51.831     16.989 -    17.105:   99.2327%  (        3)
00:13:51.831     17.222 -    17.338:   99.2525%  (        3)
00:13:51.831     17.338 -    17.455:   99.2658%  (        2)
00:13:51.831     17.571 -    17.687:   99.2724%  (        1)
00:13:51.831     17.687 -    17.804:   99.2790%  (        1)
00:13:51.831     17.804 -    17.920:   99.2988%  (        3)
00:13:51.831     17.920 -    18.036:   99.3584%  (        9)
00:13:51.831     18.036 -    18.153:   99.3848%  (        4)
00:13:51.831     18.153 -    18.269:   99.4113%  (        4)
00:13:51.831     18.269 -    18.385:   99.4179%  (        1)
00:13:51.831     19.782 -    19.898:   99.4245%  (        1)
00:13:51.831     23.855 -    23.971:   99.4311%  (        1)
00:13:51.831     24.204 -    24.320:   99.4378%  (        1)
00:13:51.831     28.975 -    29.091:   99.4444%  (        1)
00:13:51.831     29.789 -    30.022:   99.4510%  (        1)
00:13:51.831     47.244 -    47.476:   99.4576%  (        1)
00:13:51.831   3053.382 -  3068.276:   99.4642%  (        1)
00:13:51.831   3083.171 -  3098.065:   99.4708%  (        1)
00:13:51.831   3932.160 -  3961.949:   99.4774%  (        1)
00:13:51.831   3961.949 -  3991.738:   99.4973%  (        3)
00:13:51.831   3991.738 -  4021.527:   99.7553%  (       39)
00:13:51.831   4021.527 -  4051.316:   99.9802%  (       34)
00:13:51.831   4051.316 -  4081.105:   99.9934%  (        2)
00:13:51.831   4468.364 -  4498.153:  100.0000%  (        1)
00:13:51.831  
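To read these histograms: each row is a latency bucket in microseconds, followed by the cumulative percentage of operations at or below the bucket's upper edge and the raw count in that bucket. For example, from the submit histogram above:

    3.287 - 3.302:  11.9659% (1372)  -> 1372 submissions landed in the 3.287-3.302 us bucket,
                                        and 11.97% of all submissions took 3.302 us or less.

The few entries up in the ~3000-4050 us buckets are what pull the average submit latency (about 8.2 us) well above the bulk of the distribution.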
00:13:51.831   06:24:08	-- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:13:51.831   06:24:08	-- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:13:51.831   06:24:08	-- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:13:51.831   06:24:08	-- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:13:51.831   06:24:08	-- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
00:13:52.090  [
00:13:52.090    {
00:13:52.090      "allow_any_host": true,
00:13:52.090      "hosts": [],
00:13:52.090      "listen_addresses": [],
00:13:52.090      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:52.090      "subtype": "Discovery"
00:13:52.090    },
00:13:52.090    {
00:13:52.090      "allow_any_host": true,
00:13:52.090      "hosts": [],
00:13:52.090      "listen_addresses": [
00:13:52.090        {
00:13:52.090          "adrfam": "IPv4",
00:13:52.090          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:13:52.090          "transport": "VFIOUSER",
00:13:52.090          "trsvcid": "0",
00:13:52.090          "trtype": "VFIOUSER"
00:13:52.090        }
00:13:52.090      ],
00:13:52.090      "max_cntlid": 65519,
00:13:52.090      "max_namespaces": 32,
00:13:52.090      "min_cntlid": 1,
00:13:52.090      "model_number": "SPDK bdev Controller",
00:13:52.090      "namespaces": [
00:13:52.090        {
00:13:52.090          "bdev_name": "Malloc1",
00:13:52.090          "name": "Malloc1",
00:13:52.090          "nguid": "8FF1D1354DCE473EBA170C04A13C8B91",
00:13:52.090          "nsid": 1,
00:13:52.090          "uuid": "8ff1d135-4dce-473e-ba17-0c04a13c8b91"
00:13:52.090        },
00:13:52.090        {
00:13:52.090          "bdev_name": "Malloc3",
00:13:52.090          "name": "Malloc3",
00:13:52.090          "nguid": "536AF21BB4E94D2A81AC9964ACE681D1",
00:13:52.090          "nsid": 2,
00:13:52.090          "uuid": "536af21b-b4e9-4d2a-81ac-9964ace681d1"
00:13:52.090        }
00:13:52.090      ],
00:13:52.090      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:13:52.090      "serial_number": "SPDK1",
00:13:52.090      "subtype": "NVMe"
00:13:52.090    },
00:13:52.090    {
00:13:52.090      "allow_any_host": true,
00:13:52.090      "hosts": [],
00:13:52.090      "listen_addresses": [
00:13:52.090        {
00:13:52.090          "adrfam": "IPv4",
00:13:52.090          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:13:52.090          "transport": "VFIOUSER",
00:13:52.090          "trsvcid": "0",
00:13:52.090          "trtype": "VFIOUSER"
00:13:52.090        }
00:13:52.090      ],
00:13:52.090      "max_cntlid": 65519,
00:13:52.090      "max_namespaces": 32,
00:13:52.090      "min_cntlid": 1,
00:13:52.090      "model_number": "SPDK bdev Controller",
00:13:52.090      "namespaces": [
00:13:52.090        {
00:13:52.090          "bdev_name": "Malloc2",
00:13:52.090          "name": "Malloc2",
00:13:52.090          "nguid": "E46778BD35044C4A99BC113EB5639C6C",
00:13:52.090          "nsid": 1,
00:13:52.090          "uuid": "e46778bd-3504-4c4a-99bc-113eb5639c6c"
00:13:52.090        }
00:13:52.090      ],
00:13:52.090      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:13:52.090      "serial_number": "SPDK2",
00:13:52.090      "subtype": "NVMe"
00:13:52.090    }
00:13:52.090  ]
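The JSON above is the raw output of the nvmf_get_subsystems RPC; the same call is repeated after the new namespace is added so the two listings can be compared. A hedged sketch of inspecting it by hand (the jq filter is illustrative and assumes jq is available):

    # List each subsystem NQN together with the names of its namespaces.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | {nqn, namespaces: [.namespaces[]?.name]}'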
00:13:52.090   06:24:09	-- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:13:52.090   06:24:09	-- target/nvmf_vfio_user.sh@34 -- # aerpid=71343
00:13:52.090   06:24:09	-- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:13:52.090   06:24:09	-- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r '		trtype:VFIOUSER 		traddr:/var/run/vfio-user/domain/vfio-user2/2 		subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file
00:13:52.090   06:24:09	-- common/autotest_common.sh@1254 -- # local i=0
00:13:52.090   06:24:09	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:52.090   06:24:09	-- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']'
00:13:52.090   06:24:09	-- common/autotest_common.sh@1257 -- # i=1
00:13:52.090   06:24:09	-- common/autotest_common.sh@1258 -- # sleep 0.1
00:13:52.348   06:24:09	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:52.348   06:24:09	-- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']'
00:13:52.348   06:24:09	-- common/autotest_common.sh@1257 -- # i=2
00:13:52.348   06:24:09	-- common/autotest_common.sh@1258 -- # sleep 0.1
00:13:52.348   06:24:09	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:52.348   06:24:09	-- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:52.348   06:24:09	-- common/autotest_common.sh@1265 -- # return 0
00:13:52.348   06:24:09	-- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:13:52.348   06:24:09	-- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
00:13:52.607  Malloc4
00:13:52.607   06:24:09	-- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
00:13:52.865   06:24:09	-- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
00:13:53.124  Asynchronous Event Request test
00:13:53.124  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:53.124  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:53.124  Registering asynchronous event callbacks...
00:13:53.124  Starting namespace attribute notice tests for all controllers...
00:13:53.124  /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:13:53.124  aer_cb - Changed Namespace
00:13:53.124  Cleaning up...
00:13:53.124  [
00:13:53.124    {
00:13:53.124      "allow_any_host": true,
00:13:53.124      "hosts": [],
00:13:53.124      "listen_addresses": [],
00:13:53.124      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:53.124      "subtype": "Discovery"
00:13:53.124    },
00:13:53.124    {
00:13:53.124      "allow_any_host": true,
00:13:53.124      "hosts": [],
00:13:53.124      "listen_addresses": [
00:13:53.124        {
00:13:53.124          "adrfam": "IPv4",
00:13:53.124          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:13:53.124          "transport": "VFIOUSER",
00:13:53.124          "trsvcid": "0",
00:13:53.124          "trtype": "VFIOUSER"
00:13:53.124        }
00:13:53.124      ],
00:13:53.124      "max_cntlid": 65519,
00:13:53.124      "max_namespaces": 32,
00:13:53.124      "min_cntlid": 1,
00:13:53.124      "model_number": "SPDK bdev Controller",
00:13:53.124      "namespaces": [
00:13:53.124        {
00:13:53.124          "bdev_name": "Malloc1",
00:13:53.124          "name": "Malloc1",
00:13:53.124          "nguid": "8FF1D1354DCE473EBA170C04A13C8B91",
00:13:53.124          "nsid": 1,
00:13:53.124          "uuid": "8ff1d135-4dce-473e-ba17-0c04a13c8b91"
00:13:53.124        },
00:13:53.124        {
00:13:53.124          "bdev_name": "Malloc3",
00:13:53.124          "name": "Malloc3",
00:13:53.124          "nguid": "536AF21BB4E94D2A81AC9964ACE681D1",
00:13:53.124          "nsid": 2,
00:13:53.124          "uuid": "536af21b-b4e9-4d2a-81ac-9964ace681d1"
00:13:53.124        }
00:13:53.124      ],
00:13:53.124      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:13:53.124      "serial_number": "SPDK1",
00:13:53.124      "subtype": "NVMe"
00:13:53.124    },
00:13:53.124    {
00:13:53.124      "allow_any_host": true,
00:13:53.124      "hosts": [],
00:13:53.124      "listen_addresses": [
00:13:53.124        {
00:13:53.124          "adrfam": "IPv4",
00:13:53.124          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:13:53.124          "transport": "VFIOUSER",
00:13:53.124          "trsvcid": "0",
00:13:53.124          "trtype": "VFIOUSER"
00:13:53.124        }
00:13:53.124      ],
00:13:53.124      "max_cntlid": 65519,
00:13:53.124      "max_namespaces": 32,
00:13:53.124      "min_cntlid": 1,
00:13:53.124      "model_number": "SPDK bdev Controller",
00:13:53.124      "namespaces": [
00:13:53.124        {
00:13:53.124          "bdev_name": "Malloc2",
00:13:53.124          "name": "Malloc2",
00:13:53.124          "nguid": "E46778BD35044C4A99BC113EB5639C6C",
00:13:53.124          "nsid": 1,
00:13:53.124          "uuid": "e46778bd-3504-4c4a-99bc-113eb5639c6c"
00:13:53.124        },
00:13:53.124        {
00:13:53.124          "bdev_name": "Malloc4",
00:13:53.124          "name": "Malloc4",
00:13:53.124          "nguid": "AB807BC102714E7F85B3A9F092879F01",
00:13:53.124          "nsid": 2,
00:13:53.124          "uuid": "ab807bc1-0271-4e7f-85b3-a9f092879f01"
00:13:53.124        }
00:13:53.124      ],
00:13:53.124      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:13:53.124      "serial_number": "SPDK2",
00:13:53.124      "subtype": "NVMe"
00:13:53.124    }
00:13:53.124  ]
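The AER test above is a small handshake: the aer example arms its asynchronous-event callback and touches /tmp/aer_touch_file, the script polls for that file, and a namespace change is then provoked so that the "aer_cb - Changed Namespace" line and the updated listing (Malloc4 as nsid 2) can be checked. The trigger is just the two RPCs already traced; condensed:

    # Create a 64 MB malloc bdev (512-byte blocks) and attach it as a second namespace of cnode2;
    # adding the namespace is what makes the target raise the namespace-attribute-changed AEN.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2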
00:13:53.124   06:24:10	-- target/nvmf_vfio_user.sh@44 -- # wait 71343
00:13:53.124   06:24:10	-- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:13:53.124   06:24:10	-- target/nvmf_vfio_user.sh@95 -- # killprocess 70661
00:13:53.124   06:24:10	-- common/autotest_common.sh@936 -- # '[' -z 70661 ']'
00:13:53.124   06:24:10	-- common/autotest_common.sh@940 -- # kill -0 70661
00:13:53.124    06:24:10	-- common/autotest_common.sh@941 -- # uname
00:13:53.124   06:24:10	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:53.124    06:24:10	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70661
00:13:53.124  killing process with pid 70661
00:13:53.124   06:24:10	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:53.124   06:24:10	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:53.124   06:24:10	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 70661'
00:13:53.124   06:24:10	-- common/autotest_common.sh@955 -- # kill 70661
00:13:53.124  [2024-12-16 06:24:10.083780] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:13:53.124   06:24:10	-- common/autotest_common.sh@960 -- # wait 70661
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:13:53.691  Process pid: 71385
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I'
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I'
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@55 -- # nvmfpid=71385
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 71385'
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:13:53.691   06:24:10	-- target/nvmf_vfio_user.sh@60 -- # waitforlisten 71385
00:13:53.691   06:24:10	-- common/autotest_common.sh@829 -- # '[' -z 71385 ']'
00:13:53.691   06:24:10	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:53.691   06:24:10	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:53.691   06:24:10	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:53.691  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:53.691   06:24:10	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:53.691   06:24:10	-- common/autotest_common.sh@10 -- # set +x
00:13:53.691  [2024-12-16 06:24:10.458189] thread.c:2929:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:13:53.692  [2024-12-16 06:24:10.459142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:53.692  [2024-12-16 06:24:10.459227] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:53.692  [2024-12-16 06:24:10.591534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:53.950  [2024-12-16 06:24:10.680365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:53.950  [2024-12-16 06:24:10.680523] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:53.950  [2024-12-16 06:24:10.680537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:53.950  [2024-12-16 06:24:10.680546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:53.950  [2024-12-16 06:24:10.680697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:53.950  [2024-12-16 06:24:10.680873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:53.950  [2024-12-16 06:24:10.681389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:53.950  [2024-12-16 06:24:10.681433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:53.950  [2024-12-16 06:24:10.764915] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode.
00:13:53.950  [2024-12-16 06:24:10.775700] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode.
00:13:53.950  [2024-12-16 06:24:10.775808] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode.
00:13:53.950  [2024-12-16 06:24:10.776605] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:13:53.950  [2024-12-16 06:24:10.776769] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode.
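For this final pass the target is restarted in interrupt mode. A hedged reading of the nvmf_tgt flags used above (the log lines confirm the tracepoint mask and interrupt mode took effect; the flag descriptions follow the standard SPDK application options):

    #   -i 0               shared-memory instance id
    #   -e 0xFFFF          tracepoint group mask (see "Tracepoint Group Mask 0xFFFF specified" above)
    #   -m '[0,1,2,3]'     run reactors on cores 0-3
    #   --interrupt-mode   run poll groups in interrupt mode instead of busy polling
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode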
00:13:54.516   06:24:11	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:54.516   06:24:11	-- common/autotest_common.sh@862 -- # return 0
00:13:54.516   06:24:11	-- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:13:55.450   06:24:12	-- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
00:13:55.708   06:24:12	-- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:13:55.708    06:24:12	-- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:13:55.708   06:24:12	-- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:55.708   06:24:12	-- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:13:55.708   06:24:12	-- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:13:56.275  Malloc1
00:13:56.275   06:24:13	-- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:13:56.534   06:24:13	-- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:13:56.792   06:24:13	-- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:13:57.050   06:24:13	-- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:57.050   06:24:13	-- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:13:57.050   06:24:13	-- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:13:57.308  Malloc2
00:13:57.308   06:24:14	-- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:13:57.566   06:24:14	-- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:13:57.824   06:24:14	-- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
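
For reference, the vfio-user target setup traced above condenses into the following sketch. It is a reconstruction from the log, not a verbatim excerpt of nvmf_vfio_user.sh: paths and NQNs are copied from the trace, the rpc.py location depends on the checkout, and the extra -M -I flags passed to nvmf_create_transport in the trace are omitted because their exact meaning varies with the SPDK revision.

    # Create the VFIOUSER transport on the running nvmf_tgt.
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    # For each emulated controller: socket directory, backing bdev, subsystem,
    # namespace, and a vfio-user listener rooted at that directory.
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done
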
00:13:58.083   06:24:14	-- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user
00:13:58.083   06:24:14	-- target/nvmf_vfio_user.sh@95 -- # killprocess 71385
00:13:58.083   06:24:14	-- common/autotest_common.sh@936 -- # '[' -z 71385 ']'
00:13:58.083   06:24:14	-- common/autotest_common.sh@940 -- # kill -0 71385
00:13:58.083    06:24:14	-- common/autotest_common.sh@941 -- # uname
00:13:58.083   06:24:14	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:58.083    06:24:14	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71385
00:13:58.083  killing process with pid 71385
00:13:58.083   06:24:14	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:58.083   06:24:14	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:58.083   06:24:14	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 71385'
00:13:58.083   06:24:14	-- common/autotest_common.sh@955 -- # kill 71385
00:13:58.083   06:24:14	-- common/autotest_common.sh@960 -- # wait 71385
00:13:58.344   06:24:15	-- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:13:58.344   06:24:15	-- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:13:58.344  
00:13:58.344  real	0m55.264s
00:13:58.344  user	3m37.351s
00:13:58.344  sys	0m3.827s
00:13:58.344   06:24:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:58.344   06:24:15	-- common/autotest_common.sh@10 -- # set +x
00:13:58.344  ************************************
00:13:58.344  END TEST nvmf_vfio_user
00:13:58.344  ************************************
00:13:58.344   06:24:15	-- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:13:58.344   06:24:15	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:58.344   06:24:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:58.344   06:24:15	-- common/autotest_common.sh@10 -- # set +x
00:13:58.344  ************************************
00:13:58.344  START TEST nvmf_vfio_user_nvme_compliance
00:13:58.344  ************************************
00:13:58.344   06:24:15	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:13:58.344  * Looking for test storage...
00:13:58.604  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance
00:13:58.604    06:24:15	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:58.604     06:24:15	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:58.604     06:24:15	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:58.604    06:24:15	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:58.604    06:24:15	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:58.604    06:24:15	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:58.604    06:24:15	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:58.604    06:24:15	-- scripts/common.sh@335 -- # IFS=.-:
00:13:58.604    06:24:15	-- scripts/common.sh@335 -- # read -ra ver1
00:13:58.604    06:24:15	-- scripts/common.sh@336 -- # IFS=.-:
00:13:58.604    06:24:15	-- scripts/common.sh@336 -- # read -ra ver2
00:13:58.604    06:24:15	-- scripts/common.sh@337 -- # local 'op=<'
00:13:58.604    06:24:15	-- scripts/common.sh@339 -- # ver1_l=2
00:13:58.604    06:24:15	-- scripts/common.sh@340 -- # ver2_l=1
00:13:58.604    06:24:15	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:58.604    06:24:15	-- scripts/common.sh@343 -- # case "$op" in
00:13:58.604    06:24:15	-- scripts/common.sh@344 -- # : 1
00:13:58.604    06:24:15	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:58.604    06:24:15	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:58.604     06:24:15	-- scripts/common.sh@364 -- # decimal 1
00:13:58.604     06:24:15	-- scripts/common.sh@352 -- # local d=1
00:13:58.604     06:24:15	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:58.604     06:24:15	-- scripts/common.sh@354 -- # echo 1
00:13:58.604    06:24:15	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:58.604     06:24:15	-- scripts/common.sh@365 -- # decimal 2
00:13:58.604     06:24:15	-- scripts/common.sh@352 -- # local d=2
00:13:58.604     06:24:15	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:58.604     06:24:15	-- scripts/common.sh@354 -- # echo 2
00:13:58.604    06:24:15	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:58.604    06:24:15	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:58.604    06:24:15	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:58.604    06:24:15	-- scripts/common.sh@367 -- # return 0
00:13:58.604    06:24:15	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:58.604    06:24:15	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:58.604  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:58.604  		--rc genhtml_branch_coverage=1
00:13:58.604  		--rc genhtml_function_coverage=1
00:13:58.604  		--rc genhtml_legend=1
00:13:58.604  		--rc geninfo_all_blocks=1
00:13:58.604  		--rc geninfo_unexecuted_blocks=1
00:13:58.604  		
00:13:58.604  		'
00:13:58.604    06:24:15	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:58.604  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:58.604  		--rc genhtml_branch_coverage=1
00:13:58.604  		--rc genhtml_function_coverage=1
00:13:58.604  		--rc genhtml_legend=1
00:13:58.604  		--rc geninfo_all_blocks=1
00:13:58.604  		--rc geninfo_unexecuted_blocks=1
00:13:58.604  		
00:13:58.604  		'
00:13:58.604    06:24:15	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:58.604  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:58.604  		--rc genhtml_branch_coverage=1
00:13:58.604  		--rc genhtml_function_coverage=1
00:13:58.604  		--rc genhtml_legend=1
00:13:58.604  		--rc geninfo_all_blocks=1
00:13:58.604  		--rc geninfo_unexecuted_blocks=1
00:13:58.604  		
00:13:58.604  		'
00:13:58.604    06:24:15	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:58.604  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:58.604  		--rc genhtml_branch_coverage=1
00:13:58.604  		--rc genhtml_function_coverage=1
00:13:58.604  		--rc genhtml_legend=1
00:13:58.604  		--rc geninfo_all_blocks=1
00:13:58.604  		--rc geninfo_unexecuted_blocks=1
00:13:58.604  		
00:13:58.604  		'
00:13:58.604   06:24:15	-- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:13:58.604     06:24:15	-- nvmf/common.sh@7 -- # uname -s
00:13:58.604    06:24:15	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:58.604    06:24:15	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:58.604    06:24:15	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:58.604    06:24:15	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:58.604    06:24:15	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:58.604    06:24:15	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:58.604    06:24:15	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:58.604    06:24:15	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:58.604    06:24:15	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:58.604     06:24:15	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:58.604    06:24:15	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:13:58.604    06:24:15	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:13:58.604    06:24:15	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:58.604    06:24:15	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:58.604    06:24:15	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:13:58.604    06:24:15	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:58.604     06:24:15	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:58.604     06:24:15	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:58.604     06:24:15	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:58.605      06:24:15	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:58.605      06:24:15	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:58.605      06:24:15	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:58.605      06:24:15	-- paths/export.sh@5 -- # export PATH
00:13:58.605      06:24:15	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:58.605    06:24:15	-- nvmf/common.sh@46 -- # : 0
00:13:58.605    06:24:15	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:58.605    06:24:15	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:58.605    06:24:15	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:58.605    06:24:15	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:58.605    06:24:15	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:58.605    06:24:15	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:13:58.605    06:24:15	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:13:58.605    06:24:15	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:13:58.605   06:24:15	-- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:58.605   06:24:15	-- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:58.605   06:24:15	-- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER
00:13:58.605   06:24:15	-- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER
00:13:58.605   06:24:15	-- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user
00:13:58.605   06:24:15	-- compliance/compliance.sh@20 -- # nvmfpid=71588
00:13:58.605  Process pid: 71588
00:13:58.605   06:24:15	-- compliance/compliance.sh@21 -- # echo 'Process pid: 71588'
00:13:58.605   06:24:15	-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:13:58.605   06:24:15	-- compliance/compliance.sh@24 -- # waitforlisten 71588
00:13:58.605   06:24:15	-- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:13:58.605   06:24:15	-- common/autotest_common.sh@829 -- # '[' -z 71588 ']'
00:13:58.605   06:24:15	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:58.605   06:24:15	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:58.605  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:58.605   06:24:15	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:58.605   06:24:15	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:58.605   06:24:15	-- common/autotest_common.sh@10 -- # set +x
00:13:58.605  [2024-12-16 06:24:15.499210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:58.605  [2024-12-16 06:24:15.499310] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:58.863  [2024-12-16 06:24:15.642062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:58.863  [2024-12-16 06:24:15.729176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:58.863  [2024-12-16 06:24:15.729330] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:58.863  [2024-12-16 06:24:15.729342] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:58.863  [2024-12-16 06:24:15.729350] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:58.863  [2024-12-16 06:24:15.729514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:58.863  [2024-12-16 06:24:15.729959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:58.863  [2024-12-16 06:24:15.729968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:59.796   06:24:16	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:59.796   06:24:16	-- common/autotest_common.sh@862 -- # return 0
00:13:59.796   06:24:16	-- compliance/compliance.sh@26 -- # sleep 1
00:14:00.730   06:24:17	-- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:14:00.730   06:24:17	-- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:14:00.730   06:24:17	-- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:14:00.730   06:24:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.730   06:24:17	-- common/autotest_common.sh@10 -- # set +x
00:14:00.730   06:24:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.730   06:24:17	-- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:14:00.730   06:24:17	-- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:14:00.730   06:24:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.730   06:24:17	-- common/autotest_common.sh@10 -- # set +x
00:14:00.730  malloc0
00:14:00.730   06:24:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.730   06:24:17	-- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:14:00.730   06:24:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.730   06:24:17	-- common/autotest_common.sh@10 -- # set +x
00:14:00.730   06:24:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.730   06:24:17	-- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:14:00.730   06:24:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.730   06:24:17	-- common/autotest_common.sh@10 -- # set +x
00:14:00.730   06:24:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.730   06:24:17	-- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:14:00.730   06:24:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.730   06:24:17	-- common/autotest_common.sh@10 -- # set +x
00:14:00.730   06:24:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.730   06:24:17	-- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:14:00.988  
00:14:00.988  
00:14:00.988       CUnit - A unit testing framework for C - Version 2.1-3
00:14:00.988       http://cunit.sourceforge.net/
00:14:00.988  
00:14:00.988  
00:14:00.988  Suite: nvme_compliance
00:14:00.988    Test: admin_identify_ctrlr_verify_dptr ...[2024-12-16 06:24:17.747028] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
00:14:00.988  [2024-12-16 06:24:17.747095] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
00:14:00.988  [2024-12-16 06:24:17.747104] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
00:14:00.988  passed
00:14:00.988    Test: admin_identify_ctrlr_verify_fused ...passed
00:14:01.246    Test: admin_identify_ns ...[2024-12-16 06:24:17.983569] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:14:01.246  [2024-12-16 06:24:17.991579] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:14:01.246  passed
00:14:01.246    Test: admin_get_features_mandatory_features ...passed
00:14:01.246    Test: admin_get_features_optional_features ...passed
00:14:01.504    Test: admin_set_features_number_of_queues ...passed
00:14:01.777    Test: admin_get_log_page_mandatory_logs ...passed
00:14:01.777    Test: admin_get_log_page_with_lpo ...[2024-12-16 06:24:18.611585] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:14:01.777  passed
00:14:01.777    Test: fabric_property_get ...passed
00:14:02.066    Test: admin_delete_io_sq_use_admin_qid ...[2024-12-16 06:24:18.792025] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:14:02.066  passed
00:14:02.066    Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-16 06:24:18.963587] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:14:02.066  [2024-12-16 06:24:18.979591] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:14:02.066  passed
00:14:02.324    Test: admin_delete_io_cq_use_admin_qid ...[2024-12-16 06:24:19.066835] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:14:02.324  passed
00:14:02.324    Test: admin_delete_io_cq_delete_cq_first ...[2024-12-16 06:24:19.225543] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:14:02.324  [2024-12-16 06:24:19.249504] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:14:02.324  passed
00:14:02.582    Test: admin_create_io_cq_verify_iv_pc ...[2024-12-16 06:24:19.338627] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:14:02.582  [2024-12-16 06:24:19.338768] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:14:02.582  passed
00:14:02.582    Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-16 06:24:19.511617] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:14:02.582  [2024-12-16 06:24:19.519588] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:14:02.582  [2024-12-16 06:24:19.527615] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:14:02.582  [2024-12-16 06:24:19.535602] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:14:02.840  passed
00:14:02.840    Test: admin_create_io_sq_verify_pc ...[2024-12-16 06:24:19.666545] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:14:02.840  passed
00:14:04.214    Test: admin_create_io_qp_max_qps ...[2024-12-16 06:24:20.867580] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:14:04.472  passed
00:14:04.730    Test: admin_create_io_sq_shared_cq ...[2024-12-16 06:24:21.475587] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:14:04.730  passed
00:14:04.730  
00:14:04.730  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:14:04.730                suites      1      1    n/a      0        0
00:14:04.730                 tests     18     18     18      0        0
00:14:04.730               asserts    360    360    360      0      n/a
00:14:04.730  
00:14:04.730  Elapsed time =    1.553 seconds
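
The compliance suite whose CUnit summary appears above is driven by one nvmf_tgt instance plus the nvme_compliance binary. A rough sketch of the equivalent manual invocation, with all values taken from the trace (the test itself issues these through rpc_cmd), would be:

    # Target side: a single malloc-backed subsystem exposed over vfio-user.
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # Initiator side: run the CUnit compliance suite against that endpoint.
    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
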
00:14:04.730   06:24:21	-- compliance/compliance.sh@42 -- # killprocess 71588
00:14:04.730   06:24:21	-- common/autotest_common.sh@936 -- # '[' -z 71588 ']'
00:14:04.730   06:24:21	-- common/autotest_common.sh@940 -- # kill -0 71588
00:14:04.730    06:24:21	-- common/autotest_common.sh@941 -- # uname
00:14:04.730   06:24:21	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:04.730    06:24:21	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71588
00:14:04.730   06:24:21	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:04.730   06:24:21	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:04.730  killing process with pid 71588
00:14:04.730   06:24:21	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 71588'
00:14:04.730   06:24:21	-- common/autotest_common.sh@955 -- # kill 71588
00:14:04.730   06:24:21	-- common/autotest_common.sh@960 -- # wait 71588
00:14:04.989   06:24:21	-- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:14:04.989   06:24:21	-- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:14:04.989  
00:14:04.989  real	0m6.613s
00:14:04.989  user	0m18.533s
00:14:04.989  sys	0m0.526s
00:14:04.989  ************************************
00:14:04.989  END TEST nvmf_vfio_user_nvme_compliance
00:14:04.989  ************************************
00:14:04.989   06:24:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:04.989   06:24:21	-- common/autotest_common.sh@10 -- # set +x
00:14:04.989   06:24:21	-- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:14:04.989   06:24:21	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:04.989   06:24:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:04.989   06:24:21	-- common/autotest_common.sh@10 -- # set +x
00:14:04.989  ************************************
00:14:04.989  START TEST nvmf_vfio_user_fuzz
00:14:04.989  ************************************
00:14:04.989   06:24:21	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:14:05.248  * Looking for test storage...
00:14:05.248  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:14:05.248    06:24:21	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:05.248     06:24:21	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:05.248     06:24:21	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:05.248    06:24:22	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:05.248    06:24:22	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:05.248    06:24:22	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:05.248    06:24:22	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:05.248    06:24:22	-- scripts/common.sh@335 -- # IFS=.-:
00:14:05.248    06:24:22	-- scripts/common.sh@335 -- # read -ra ver1
00:14:05.248    06:24:22	-- scripts/common.sh@336 -- # IFS=.-:
00:14:05.248    06:24:22	-- scripts/common.sh@336 -- # read -ra ver2
00:14:05.248    06:24:22	-- scripts/common.sh@337 -- # local 'op=<'
00:14:05.248    06:24:22	-- scripts/common.sh@339 -- # ver1_l=2
00:14:05.248    06:24:22	-- scripts/common.sh@340 -- # ver2_l=1
00:14:05.248    06:24:22	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:05.248    06:24:22	-- scripts/common.sh@343 -- # case "$op" in
00:14:05.248    06:24:22	-- scripts/common.sh@344 -- # : 1
00:14:05.248    06:24:22	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:05.248    06:24:22	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:05.248     06:24:22	-- scripts/common.sh@364 -- # decimal 1
00:14:05.248     06:24:22	-- scripts/common.sh@352 -- # local d=1
00:14:05.248     06:24:22	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:05.248     06:24:22	-- scripts/common.sh@354 -- # echo 1
00:14:05.248    06:24:22	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:05.248     06:24:22	-- scripts/common.sh@365 -- # decimal 2
00:14:05.248     06:24:22	-- scripts/common.sh@352 -- # local d=2
00:14:05.248     06:24:22	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:05.248     06:24:22	-- scripts/common.sh@354 -- # echo 2
00:14:05.248    06:24:22	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:05.248    06:24:22	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:05.248    06:24:22	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:05.248    06:24:22	-- scripts/common.sh@367 -- # return 0
00:14:05.248    06:24:22	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:05.248    06:24:22	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:05.248  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:05.248  		--rc genhtml_branch_coverage=1
00:14:05.248  		--rc genhtml_function_coverage=1
00:14:05.248  		--rc genhtml_legend=1
00:14:05.248  		--rc geninfo_all_blocks=1
00:14:05.248  		--rc geninfo_unexecuted_blocks=1
00:14:05.248  		
00:14:05.248  		'
00:14:05.248    06:24:22	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:05.248  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:05.248  		--rc genhtml_branch_coverage=1
00:14:05.248  		--rc genhtml_function_coverage=1
00:14:05.248  		--rc genhtml_legend=1
00:14:05.248  		--rc geninfo_all_blocks=1
00:14:05.248  		--rc geninfo_unexecuted_blocks=1
00:14:05.248  		
00:14:05.248  		'
00:14:05.248    06:24:22	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:05.248  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:05.248  		--rc genhtml_branch_coverage=1
00:14:05.248  		--rc genhtml_function_coverage=1
00:14:05.249  		--rc genhtml_legend=1
00:14:05.249  		--rc geninfo_all_blocks=1
00:14:05.249  		--rc geninfo_unexecuted_blocks=1
00:14:05.249  		
00:14:05.249  		'
00:14:05.249    06:24:22	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:05.249  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:05.249  		--rc genhtml_branch_coverage=1
00:14:05.249  		--rc genhtml_function_coverage=1
00:14:05.249  		--rc genhtml_legend=1
00:14:05.249  		--rc geninfo_all_blocks=1
00:14:05.249  		--rc geninfo_unexecuted_blocks=1
00:14:05.249  		
00:14:05.249  		'
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:14:05.249     06:24:22	-- nvmf/common.sh@7 -- # uname -s
00:14:05.249    06:24:22	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:05.249    06:24:22	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:05.249    06:24:22	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:05.249    06:24:22	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:05.249    06:24:22	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:05.249    06:24:22	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:05.249    06:24:22	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:05.249    06:24:22	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:05.249    06:24:22	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:05.249     06:24:22	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:05.249    06:24:22	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:14:05.249    06:24:22	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:14:05.249    06:24:22	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:05.249    06:24:22	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:05.249    06:24:22	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:14:05.249    06:24:22	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:05.249     06:24:22	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:05.249     06:24:22	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:05.249     06:24:22	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:05.249      06:24:22	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:05.249      06:24:22	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:05.249      06:24:22	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:05.249      06:24:22	-- paths/export.sh@5 -- # export PATH
00:14:05.249      06:24:22	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:05.249    06:24:22	-- nvmf/common.sh@46 -- # : 0
00:14:05.249    06:24:22	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:14:05.249    06:24:22	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:14:05.249    06:24:22	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:14:05.249    06:24:22	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:05.249    06:24:22	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:05.249    06:24:22	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:14:05.249    06:24:22	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:14:05.249    06:24:22	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@24 -- # nvmfpid=71749
00:14:05.249  Process pid: 71749
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 71749'
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@28 -- # waitforlisten 71749
00:14:05.249   06:24:22	-- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:14:05.249   06:24:22	-- common/autotest_common.sh@829 -- # '[' -z 71749 ']'
00:14:05.249   06:24:22	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:05.249  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:05.249   06:24:22	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:05.249   06:24:22	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:05.249   06:24:22	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:05.249   06:24:22	-- common/autotest_common.sh@10 -- # set +x
00:14:06.625   06:24:23	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:06.625   06:24:23	-- common/autotest_common.sh@862 -- # return 0
00:14:06.625   06:24:23	-- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:14:07.192   06:24:24	-- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:14:07.451   06:24:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.451   06:24:24	-- common/autotest_common.sh@10 -- # set +x
00:14:07.451   06:24:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.451   06:24:24	-- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:14:07.451   06:24:24	-- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:14:07.451   06:24:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.451   06:24:24	-- common/autotest_common.sh@10 -- # set +x
00:14:07.451  malloc0
00:14:07.451   06:24:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.451   06:24:24	-- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:14:07.451   06:24:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.451   06:24:24	-- common/autotest_common.sh@10 -- # set +x
00:14:07.451   06:24:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.451   06:24:24	-- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:14:07.451   06:24:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.451   06:24:24	-- common/autotest_common.sh@10 -- # set +x
00:14:07.451   06:24:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.451   06:24:24	-- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:14:07.451   06:24:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.451   06:24:24	-- common/autotest_common.sh@10 -- # set +x
00:14:07.451   06:24:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.451   06:24:24	-- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:14:07.451   06:24:24	-- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:14:07.709  Shutting down the fuzz application
00:14:07.709   06:24:24	-- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:14:07.709   06:24:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.709   06:24:24	-- common/autotest_common.sh@10 -- # set +x
00:14:07.709   06:24:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.709   06:24:24	-- target/vfio_user_fuzz.sh@46 -- # killprocess 71749
00:14:07.709   06:24:24	-- common/autotest_common.sh@936 -- # '[' -z 71749 ']'
00:14:07.709   06:24:24	-- common/autotest_common.sh@940 -- # kill -0 71749
00:14:07.709    06:24:24	-- common/autotest_common.sh@941 -- # uname
00:14:07.709   06:24:24	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:07.709    06:24:24	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71749
00:14:07.709   06:24:24	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:07.709   06:24:24	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:07.709  killing process with pid 71749
00:14:07.709   06:24:24	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 71749'
00:14:07.709   06:24:24	-- common/autotest_common.sh@955 -- # kill 71749
00:14:07.709   06:24:24	-- common/autotest_common.sh@960 -- # wait 71749
00:14:07.968   06:24:24	-- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:14:07.968   06:24:24	-- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:14:07.968  
00:14:07.968  real	0m3.018s
00:14:07.968  user	0m3.337s
00:14:07.968  sys	0m0.419s
00:14:07.968   06:24:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:07.968   06:24:24	-- common/autotest_common.sh@10 -- # set +x
00:14:07.968  ************************************
00:14:07.968  END TEST nvmf_vfio_user_fuzz
00:14:07.968  ************************************
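
The fuzz pass that just completed amounts to pointing nvme_fuzz at the same kind of vfio-user endpoint for a fixed interval. The sketch below is reconstructed from the trace; the -t and -S arguments appear to set the run time (30 s) and the RNG seed (123456), and the remaining flags are copied verbatim without interpretation.

    # Target setup mirrors the compliance test: VFIOUSER transport + one malloc namespace.
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # Timed fuzz run with a fixed seed so any failure is reproducible.
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
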
00:14:08.228   06:24:24	-- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:14:08.228   06:24:24	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:08.228   06:24:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:08.228   06:24:24	-- common/autotest_common.sh@10 -- # set +x
00:14:08.228  ************************************
00:14:08.228  START TEST nvmf_host_management
00:14:08.228  ************************************
00:14:08.228   06:24:24	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:14:08.228  * Looking for test storage...
00:14:08.228  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:14:08.228    06:24:25	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:08.228     06:24:25	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:08.228     06:24:25	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:08.228    06:24:25	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:08.228    06:24:25	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:08.228    06:24:25	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:08.228    06:24:25	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:08.228    06:24:25	-- scripts/common.sh@335 -- # IFS=.-:
00:14:08.228    06:24:25	-- scripts/common.sh@335 -- # read -ra ver1
00:14:08.228    06:24:25	-- scripts/common.sh@336 -- # IFS=.-:
00:14:08.228    06:24:25	-- scripts/common.sh@336 -- # read -ra ver2
00:14:08.228    06:24:25	-- scripts/common.sh@337 -- # local 'op=<'
00:14:08.228    06:24:25	-- scripts/common.sh@339 -- # ver1_l=2
00:14:08.228    06:24:25	-- scripts/common.sh@340 -- # ver2_l=1
00:14:08.228    06:24:25	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:08.228    06:24:25	-- scripts/common.sh@343 -- # case "$op" in
00:14:08.228    06:24:25	-- scripts/common.sh@344 -- # : 1
00:14:08.228    06:24:25	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:08.228    06:24:25	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:08.228     06:24:25	-- scripts/common.sh@364 -- # decimal 1
00:14:08.228     06:24:25	-- scripts/common.sh@352 -- # local d=1
00:14:08.228     06:24:25	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.228     06:24:25	-- scripts/common.sh@354 -- # echo 1
00:14:08.228    06:24:25	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:08.228     06:24:25	-- scripts/common.sh@365 -- # decimal 2
00:14:08.228     06:24:25	-- scripts/common.sh@352 -- # local d=2
00:14:08.228     06:24:25	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:08.228     06:24:25	-- scripts/common.sh@354 -- # echo 2
00:14:08.228    06:24:25	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:08.228    06:24:25	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:08.228    06:24:25	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:08.228    06:24:25	-- scripts/common.sh@367 -- # return 0
00:14:08.228    06:24:25	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:08.228    06:24:25	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:08.228  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:08.228  		--rc genhtml_branch_coverage=1
00:14:08.228  		--rc genhtml_function_coverage=1
00:14:08.228  		--rc genhtml_legend=1
00:14:08.228  		--rc geninfo_all_blocks=1
00:14:08.228  		--rc geninfo_unexecuted_blocks=1
00:14:08.228  		
00:14:08.228  		'
00:14:08.228    06:24:25	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:08.228  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:08.228  		--rc genhtml_branch_coverage=1
00:14:08.228  		--rc genhtml_function_coverage=1
00:14:08.228  		--rc genhtml_legend=1
00:14:08.228  		--rc geninfo_all_blocks=1
00:14:08.228  		--rc geninfo_unexecuted_blocks=1
00:14:08.228  		
00:14:08.228  		'
00:14:08.228    06:24:25	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:08.228  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:08.228  		--rc genhtml_branch_coverage=1
00:14:08.228  		--rc genhtml_function_coverage=1
00:14:08.228  		--rc genhtml_legend=1
00:14:08.228  		--rc geninfo_all_blocks=1
00:14:08.228  		--rc geninfo_unexecuted_blocks=1
00:14:08.228  		
00:14:08.228  		'
00:14:08.228    06:24:25	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:08.228  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:08.228  		--rc genhtml_branch_coverage=1
00:14:08.228  		--rc genhtml_function_coverage=1
00:14:08.228  		--rc genhtml_legend=1
00:14:08.228  		--rc geninfo_all_blocks=1
00:14:08.228  		--rc geninfo_unexecuted_blocks=1
00:14:08.228  		
00:14:08.228  		'
00:14:08.228   06:24:25	-- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:14:08.228     06:24:25	-- nvmf/common.sh@7 -- # uname -s
00:14:08.228    06:24:25	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:08.228    06:24:25	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:08.228    06:24:25	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:08.228    06:24:25	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:08.228    06:24:25	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:08.228    06:24:25	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:08.228    06:24:25	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:08.228    06:24:25	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:08.228    06:24:25	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:08.228     06:24:25	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:08.228    06:24:25	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:14:08.228    06:24:25	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:14:08.228    06:24:25	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:08.228    06:24:25	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:08.228    06:24:25	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:14:08.228    06:24:25	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:08.228     06:24:25	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:08.228     06:24:25	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:08.228     06:24:25	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:08.228      06:24:25	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:08.228      06:24:25	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:08.228      06:24:25	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:08.228      06:24:25	-- paths/export.sh@5 -- # export PATH
00:14:08.228      06:24:25	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:08.228    06:24:25	-- nvmf/common.sh@46 -- # : 0
00:14:08.228    06:24:25	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:14:08.228    06:24:25	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:14:08.228    06:24:25	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:14:08.228    06:24:25	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:08.228    06:24:25	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:08.228    06:24:25	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:14:08.228    06:24:25	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:14:08.228    06:24:25	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:14:08.228   06:24:25	-- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:14:08.228   06:24:25	-- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:14:08.228   06:24:25	-- target/host_management.sh@104 -- # nvmftestinit
00:14:08.228   06:24:25	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:14:08.228   06:24:25	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:08.228   06:24:25	-- nvmf/common.sh@436 -- # prepare_net_devs
00:14:08.228   06:24:25	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:14:08.228   06:24:25	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:14:08.228   06:24:25	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:08.228   06:24:25	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:08.228    06:24:25	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:08.486   06:24:25	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:14:08.487   06:24:25	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:14:08.487   06:24:25	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:14:08.487   06:24:25	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:14:08.487   06:24:25	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:14:08.487   06:24:25	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:14:08.487   06:24:25	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:08.487   06:24:25	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:08.487   06:24:25	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:14:08.487   06:24:25	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:14:08.487   06:24:25	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:14:08.487   06:24:25	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:14:08.487   06:24:25	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:14:08.487   06:24:25	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:08.487   06:24:25	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:14:08.487   06:24:25	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:14:08.487   06:24:25	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:14:08.487   06:24:25	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:14:08.487   06:24:25	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:14:08.487   06:24:25	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:14:08.487  Cannot find device "nvmf_tgt_br"
00:14:08.487   06:24:25	-- nvmf/common.sh@154 -- # true
00:14:08.487   06:24:25	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:14:08.487  Cannot find device "nvmf_tgt_br2"
00:14:08.487   06:24:25	-- nvmf/common.sh@155 -- # true
00:14:08.487   06:24:25	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:14:08.487   06:24:25	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:14:08.487  Cannot find device "nvmf_tgt_br"
00:14:08.487   06:24:25	-- nvmf/common.sh@157 -- # true
00:14:08.487   06:24:25	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:14:08.487  Cannot find device "nvmf_tgt_br2"
00:14:08.487   06:24:25	-- nvmf/common.sh@158 -- # true
00:14:08.487   06:24:25	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:14:08.487   06:24:25	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:14:08.487   06:24:25	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:14:08.487  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:14:08.487   06:24:25	-- nvmf/common.sh@161 -- # true
00:14:08.487   06:24:25	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:14:08.487  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:14:08.487   06:24:25	-- nvmf/common.sh@162 -- # true
00:14:08.487   06:24:25	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:14:08.487   06:24:25	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:14:08.487   06:24:25	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:14:08.487   06:24:25	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:14:08.487   06:24:25	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:14:08.487   06:24:25	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:14:08.487   06:24:25	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:14:08.487   06:24:25	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:14:08.487   06:24:25	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:14:08.487   06:24:25	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:14:08.487   06:24:25	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:14:08.487   06:24:25	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:14:08.487   06:24:25	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:14:08.487   06:24:25	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:14:08.487   06:24:25	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:14:08.487   06:24:25	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:14:08.487   06:24:25	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:14:08.746   06:24:25	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:14:08.746   06:24:25	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:14:08.746   06:24:25	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:14:08.746   06:24:25	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:14:08.746   06:24:25	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:14:08.746   06:24:25	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:14:08.746   06:24:25	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:14:08.746  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:08.746  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:14:08.746  
00:14:08.746  --- 10.0.0.2 ping statistics ---
00:14:08.746  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:08.746  rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:14:08.746   06:24:25	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:14:08.746  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:14:08.746  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms
00:14:08.746  
00:14:08.746  --- 10.0.0.3 ping statistics ---
00:14:08.746  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:08.746  rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:14:08.746   06:24:25	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:14:08.746  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:08.746  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms
00:14:08.746  
00:14:08.746  --- 10.0.0.1 ping statistics ---
00:14:08.746  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:08.746  rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
00:14:08.746   06:24:25	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:08.746   06:24:25	-- nvmf/common.sh@421 -- # return 0
00:14:08.746   06:24:25	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:14:08.746   06:24:25	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:08.746   06:24:25	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:14:08.746   06:24:25	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:14:08.746   06:24:25	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:08.746   06:24:25	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:14:08.746   06:24:25	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:14:08.746   06:24:25	-- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management
00:14:08.746   06:24:25	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:08.746   06:24:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:08.746   06:24:25	-- common/autotest_common.sh@10 -- # set +x
00:14:08.746  ************************************
00:14:08.746  START TEST nvmf_host_management
00:14:08.746  ************************************
00:14:08.746   06:24:25	-- common/autotest_common.sh@1114 -- # nvmf_host_management
00:14:08.746   06:24:25	-- target/host_management.sh@69 -- # starttarget
00:14:08.746   06:24:25	-- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:14:08.746   06:24:25	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:14:08.746   06:24:25	-- common/autotest_common.sh@722 -- # xtrace_disable
00:14:08.746   06:24:25	-- common/autotest_common.sh@10 -- # set +x
00:14:08.746   06:24:25	-- nvmf/common.sh@469 -- # nvmfpid=71987
00:14:08.746   06:24:25	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:14:08.746   06:24:25	-- nvmf/common.sh@470 -- # waitforlisten 71987
00:14:08.746   06:24:25	-- common/autotest_common.sh@829 -- # '[' -z 71987 ']'
00:14:08.746   06:24:25	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:08.746   06:24:25	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:08.746  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:08.746   06:24:25	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:08.746   06:24:25	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:08.746   06:24:25	-- common/autotest_common.sh@10 -- # set +x
00:14:08.746  [2024-12-16 06:24:25.635199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:08.746  [2024-12-16 06:24:25.635321] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:09.004  [2024-12-16 06:24:25.778737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:09.004  [2024-12-16 06:24:25.900522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:09.004  [2024-12-16 06:24:25.900715] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:09.004  [2024-12-16 06:24:25.900732] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:09.004  [2024-12-16 06:24:25.900744] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:09.004  [2024-12-16 06:24:25.900942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:14:09.004  [2024-12-16 06:24:25.901677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:14:09.004  [2024-12-16 06:24:25.901840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:14:09.004  [2024-12-16 06:24:25.901846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:09.937   06:24:26	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:09.937   06:24:26	-- common/autotest_common.sh@862 -- # return 0
00:14:09.937   06:24:26	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:14:09.937   06:24:26	-- common/autotest_common.sh@728 -- # xtrace_disable
00:14:09.937   06:24:26	-- common/autotest_common.sh@10 -- # set +x
00:14:09.937   06:24:26	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:09.937   06:24:26	-- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:09.937   06:24:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.937   06:24:26	-- common/autotest_common.sh@10 -- # set +x
00:14:09.937  [2024-12-16 06:24:26.699622] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:09.937   06:24:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.937   06:24:26	-- target/host_management.sh@20 -- # timing_enter create_subsystem
00:14:09.937   06:24:26	-- common/autotest_common.sh@722 -- # xtrace_disable
00:14:09.937   06:24:26	-- common/autotest_common.sh@10 -- # set +x
00:14:09.937   06:24:26	-- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:14:09.937   06:24:26	-- target/host_management.sh@23 -- # cat
00:14:09.937   06:24:26	-- target/host_management.sh@30 -- # rpc_cmd
00:14:09.937   06:24:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.937   06:24:26	-- common/autotest_common.sh@10 -- # set +x
00:14:09.937  Malloc0
00:14:09.937  [2024-12-16 06:24:26.780983] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:09.937   06:24:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.937   06:24:26	-- target/host_management.sh@31 -- # timing_exit create_subsystems
00:14:09.937   06:24:26	-- common/autotest_common.sh@728 -- # xtrace_disable
00:14:09.937   06:24:26	-- common/autotest_common.sh@10 -- # set +x
00:14:09.937   06:24:26	-- target/host_management.sh@73 -- # perfpid=72061
00:14:09.937   06:24:26	-- target/host_management.sh@74 -- # waitforlisten 72061 /var/tmp/bdevperf.sock
00:14:09.937   06:24:26	-- common/autotest_common.sh@829 -- # '[' -z 72061 ']'
00:14:09.937   06:24:26	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:09.937   06:24:26	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:09.937   06:24:26	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:09.937  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:09.937   06:24:26	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:09.937   06:24:26	-- common/autotest_common.sh@10 -- # set +x
00:14:09.937   06:24:26	-- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:14:09.937    06:24:26	-- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:14:09.937    06:24:26	-- nvmf/common.sh@520 -- # config=()
00:14:09.937    06:24:26	-- nvmf/common.sh@520 -- # local subsystem config
00:14:09.937    06:24:26	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:14:09.937    06:24:26	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:14:09.937  {
00:14:09.937    "params": {
00:14:09.937      "name": "Nvme$subsystem",
00:14:09.937      "trtype": "$TEST_TRANSPORT",
00:14:09.937      "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:09.937      "adrfam": "ipv4",
00:14:09.937      "trsvcid": "$NVMF_PORT",
00:14:09.937      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:09.937      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:09.937      "hdgst": ${hdgst:-false},
00:14:09.937      "ddgst": ${ddgst:-false}
00:14:09.937    },
00:14:09.937    "method": "bdev_nvme_attach_controller"
00:14:09.937  }
00:14:09.937  EOF
00:14:09.937  )")
00:14:09.937     06:24:26	-- nvmf/common.sh@542 -- # cat
00:14:09.937    06:24:26	-- nvmf/common.sh@544 -- # jq .
00:14:09.937     06:24:26	-- nvmf/common.sh@545 -- # IFS=,
00:14:09.937     06:24:26	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:14:09.937    "params": {
00:14:09.937      "name": "Nvme0",
00:14:09.937      "trtype": "tcp",
00:14:09.937      "traddr": "10.0.0.2",
00:14:09.937      "adrfam": "ipv4",
00:14:09.937      "trsvcid": "4420",
00:14:09.937      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:09.937      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:14:09.937      "hdgst": false,
00:14:09.937      "ddgst": false
00:14:09.937    },
00:14:09.937    "method": "bdev_nvme_attach_controller"
00:14:09.937  }'
00:14:09.937  [2024-12-16 06:24:26.910239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:09.937  [2024-12-16 06:24:26.910400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72061 ]
00:14:10.194  [2024-12-16 06:24:27.056943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:10.194  [2024-12-16 06:24:27.158880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:10.452  Running I/O for 10 seconds...
00:14:11.017   06:24:27	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:11.017   06:24:27	-- common/autotest_common.sh@862 -- # return 0
00:14:11.017   06:24:27	-- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:14:11.017   06:24:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.017   06:24:27	-- common/autotest_common.sh@10 -- # set +x
00:14:11.017   06:24:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.017   06:24:27	-- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:11.017   06:24:27	-- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:14:11.017   06:24:27	-- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:14:11.017   06:24:27	-- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:14:11.017   06:24:27	-- target/host_management.sh@52 -- # local ret=1
00:14:11.017   06:24:27	-- target/host_management.sh@53 -- # local i
00:14:11.017   06:24:27	-- target/host_management.sh@54 -- # (( i = 10 ))
00:14:11.017   06:24:27	-- target/host_management.sh@54 -- # (( i != 0 ))
00:14:11.017    06:24:27	-- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:14:11.017    06:24:27	-- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:14:11.017    06:24:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.017    06:24:27	-- common/autotest_common.sh@10 -- # set +x
00:14:11.277    06:24:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.277   06:24:28	-- target/host_management.sh@55 -- # read_io_count=2262
00:14:11.277   06:24:28	-- target/host_management.sh@58 -- # '[' 2262 -ge 100 ']'
00:14:11.277   06:24:28	-- target/host_management.sh@59 -- # ret=0
00:14:11.277   06:24:28	-- target/host_management.sh@60 -- # break
00:14:11.277   06:24:28	-- target/host_management.sh@64 -- # return 0
00:14:11.277   06:24:28	-- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:11.277   06:24:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.277   06:24:28	-- common/autotest_common.sh@10 -- # set +x
00:14:11.277  [2024-12-16 06:24:28.038216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038280] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038397] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038405] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038455] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.038540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518910 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.039403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:11.277  [2024-12-16 06:24:28.039444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.039458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:11.277  [2024-12-16 06:24:28.039468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.039477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:11.277  [2024-12-16 06:24:28.039500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.039512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:11.277  [2024-12-16 06:24:28.039520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.039529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2dc0 is same with the state(5) to be set
00:14:11.277  [2024-12-16 06:24:28.040123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.277  [2024-12-16 06:24:28.040552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.277  [2024-12-16 06:24:28.040561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.040985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.040996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.278  [2024-12-16 06:24:28.041350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.278  [2024-12-16 06:24:28.041359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.279  [2024-12-16 06:24:28.041369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.279  [2024-12-16 06:24:28.041378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.279  [2024-12-16 06:24:28.041389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.279  [2024-12-16 06:24:28.041397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.279  [2024-12-16 06:24:28.041408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.279  [2024-12-16 06:24:28.041416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.279  [2024-12-16 06:24:28.041427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.279  [2024-12-16 06:24:28.041435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.279  [2024-12-16 06:24:28.041446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:11.279  [2024-12-16 06:24:28.041454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.279  [2024-12-16 06:24:28.041548] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24a6400 was disconnected and freed. reset controller.
00:14:11.279  [2024-12-16 06:24:28.042665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:11.279  task offset: 52352 on job bdev=Nvme0n1 fails
00:14:11.279  
00:14:11.279                                                                                                  Latency(us)
00:14:11.279  
[2024-12-16T06:24:28.255Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:11.279  
[2024-12-16T06:24:28.255Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:11.279  
[2024-12-16T06:24:28.255Z]  Job: Nvme0n1 ended in about 0.71 seconds with error
00:14:11.279  	 Verification LBA range: start 0x0 length 0x400
00:14:11.279  	 Nvme0n1             :       0.71    3443.89     215.24      90.41     0.00   17812.80    1765.00   22282.24
00:14:11.279  
[2024-12-16T06:24:28.255Z]  ===================================================================================================================
00:14:11.279  
[2024-12-16T06:24:28.255Z]  Total                       :               3443.89     215.24      90.41     0.00   17812.80    1765.00   22282.24
00:14:11.279  [2024-12-16 06:24:28.044595] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:11.279  [2024-12-16 06:24:28.044630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d2dc0 (9): Bad file descriptor
00:14:11.279   06:24:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.279   06:24:28	-- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:11.279   06:24:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.279   06:24:28	-- common/autotest_common.sh@10 -- # set +x
00:14:11.279  [2024-12-16 06:24:28.053373] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:11.279   06:24:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.279   06:24:28	-- target/host_management.sh@87 -- # sleep 1
00:14:12.210   06:24:29	-- target/host_management.sh@91 -- # kill -9 72061
00:14:12.210  /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72061) - No such process
00:14:12.210   06:24:29	-- target/host_management.sh@91 -- # true
00:14:12.210   06:24:29	-- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:14:12.210   06:24:29	-- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:14:12.210    06:24:29	-- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:14:12.210    06:24:29	-- nvmf/common.sh@520 -- # config=()
00:14:12.210    06:24:29	-- nvmf/common.sh@520 -- # local subsystem config
00:14:12.210    06:24:29	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:14:12.210    06:24:29	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:14:12.210  {
00:14:12.210    "params": {
00:14:12.210      "name": "Nvme$subsystem",
00:14:12.210      "trtype": "$TEST_TRANSPORT",
00:14:12.210      "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:12.210      "adrfam": "ipv4",
00:14:12.210      "trsvcid": "$NVMF_PORT",
00:14:12.210      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:12.210      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:12.210      "hdgst": ${hdgst:-false},
00:14:12.210      "ddgst": ${ddgst:-false}
00:14:12.210    },
00:14:12.210    "method": "bdev_nvme_attach_controller"
00:14:12.210  }
00:14:12.210  EOF
00:14:12.210  )")
00:14:12.210     06:24:29	-- nvmf/common.sh@542 -- # cat
00:14:12.210    06:24:29	-- nvmf/common.sh@544 -- # jq .
00:14:12.210     06:24:29	-- nvmf/common.sh@545 -- # IFS=,
00:14:12.210     06:24:29	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:14:12.210    "params": {
00:14:12.210      "name": "Nvme0",
00:14:12.210      "trtype": "tcp",
00:14:12.210      "traddr": "10.0.0.2",
00:14:12.210      "adrfam": "ipv4",
00:14:12.210      "trsvcid": "4420",
00:14:12.210      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:12.210      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:14:12.210      "hdgst": false,
00:14:12.210      "ddgst": false
00:14:12.210    },
00:14:12.210    "method": "bdev_nvme_attach_controller"
00:14:12.210  }'
00:14:12.210  [2024-12-16 06:24:29.112334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:12.210  [2024-12-16 06:24:29.112431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72115 ]
00:14:12.468  [2024-12-16 06:24:29.244432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:12.468  [2024-12-16 06:24:29.327451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:12.727  Running I/O for 1 seconds...
00:14:13.664  
00:14:13.664                                                                                                  Latency(us)
00:14:13.664  
[2024-12-16T06:24:30.640Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:13.664  
[2024-12-16T06:24:30.640Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:13.664  	 Verification LBA range: start 0x0 length 0x400
00:14:13.664  	 Nvme0n1             :       1.00    3687.69     230.48       0.00     0.00   17069.96     830.37   23116.33
00:14:13.664  
[2024-12-16T06:24:30.640Z]  ===================================================================================================================
00:14:13.664  
[2024-12-16T06:24:30.640Z]  Total                       :               3687.69     230.48       0.00     0.00   17069.96     830.37   23116.33
00:14:13.922   06:24:30	-- target/host_management.sh@101 -- # stoptarget
00:14:13.922   06:24:30	-- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:14:13.922   06:24:30	-- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:14:13.922   06:24:30	-- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:14:13.922   06:24:30	-- target/host_management.sh@40 -- # nvmftestfini
00:14:13.922   06:24:30	-- nvmf/common.sh@476 -- # nvmfcleanup
00:14:13.922   06:24:30	-- nvmf/common.sh@116 -- # sync
00:14:13.922   06:24:30	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:13.922   06:24:30	-- nvmf/common.sh@119 -- # set +e
00:14:13.922   06:24:30	-- nvmf/common.sh@120 -- # for i in {1..20}
00:14:13.922   06:24:30	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:13.922  rmmod nvme_tcp
00:14:13.922  rmmod nvme_fabrics
00:14:13.922  rmmod nvme_keyring
00:14:13.922   06:24:30	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:13.922   06:24:30	-- nvmf/common.sh@123 -- # set -e
00:14:13.922   06:24:30	-- nvmf/common.sh@124 -- # return 0
00:14:13.922   06:24:30	-- nvmf/common.sh@477 -- # '[' -n 71987 ']'
00:14:13.922   06:24:30	-- nvmf/common.sh@478 -- # killprocess 71987
00:14:13.922   06:24:30	-- common/autotest_common.sh@936 -- # '[' -z 71987 ']'
00:14:13.922   06:24:30	-- common/autotest_common.sh@940 -- # kill -0 71987
00:14:13.922    06:24:30	-- common/autotest_common.sh@941 -- # uname
00:14:13.922   06:24:30	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:13.922    06:24:30	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71987
00:14:13.922   06:24:30	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:13.922   06:24:30	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:13.922  killing process with pid 71987
00:14:13.922   06:24:30	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 71987'
00:14:13.922   06:24:30	-- common/autotest_common.sh@955 -- # kill 71987
00:14:13.922   06:24:30	-- common/autotest_common.sh@960 -- # wait 71987
00:14:14.180  [2024-12-16 06:24:31.127733] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:14:14.180   06:24:31	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:14:14.180   06:24:31	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:14:14.180   06:24:31	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:14:14.180   06:24:31	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:14.438   06:24:31	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:14:14.438   06:24:31	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:14.438   06:24:31	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:14.438    06:24:31	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:14.438   06:24:31	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:14:14.438  
00:14:14.438  real	0m5.618s
00:14:14.438  user	0m23.528s
00:14:14.438  sys	0m1.321s
00:14:14.438   06:24:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:14.438   06:24:31	-- common/autotest_common.sh@10 -- # set +x
00:14:14.438  ************************************
00:14:14.438  END TEST nvmf_host_management
00:14:14.438  ************************************
00:14:14.438   06:24:31	-- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:14:14.438  
00:14:14.438  real	0m6.238s
00:14:14.438  user	0m23.725s
00:14:14.438  sys	0m1.599s
00:14:14.438   06:24:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:14.438   06:24:31	-- common/autotest_common.sh@10 -- # set +x
00:14:14.438  ************************************
00:14:14.438  END TEST nvmf_host_management
00:14:14.438  ************************************
00:14:14.438   06:24:31	-- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:14:14.438   06:24:31	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:14.438   06:24:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:14.438   06:24:31	-- common/autotest_common.sh@10 -- # set +x
00:14:14.438  ************************************
00:14:14.438  START TEST nvmf_lvol
00:14:14.438  ************************************
00:14:14.438   06:24:31	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:14:14.438  * Looking for test storage...
00:14:14.438  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:14:14.438    06:24:31	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:14.438     06:24:31	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:14.438     06:24:31	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:14.697    06:24:31	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:14.697    06:24:31	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:14.697    06:24:31	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:14.697    06:24:31	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:14.697    06:24:31	-- scripts/common.sh@335 -- # IFS=.-:
00:14:14.697    06:24:31	-- scripts/common.sh@335 -- # read -ra ver1
00:14:14.697    06:24:31	-- scripts/common.sh@336 -- # IFS=.-:
00:14:14.697    06:24:31	-- scripts/common.sh@336 -- # read -ra ver2
00:14:14.697    06:24:31	-- scripts/common.sh@337 -- # local 'op=<'
00:14:14.697    06:24:31	-- scripts/common.sh@339 -- # ver1_l=2
00:14:14.697    06:24:31	-- scripts/common.sh@340 -- # ver2_l=1
00:14:14.697    06:24:31	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:14.697    06:24:31	-- scripts/common.sh@343 -- # case "$op" in
00:14:14.697    06:24:31	-- scripts/common.sh@344 -- # : 1
00:14:14.697    06:24:31	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:14.697    06:24:31	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:14.697     06:24:31	-- scripts/common.sh@364 -- # decimal 1
00:14:14.697     06:24:31	-- scripts/common.sh@352 -- # local d=1
00:14:14.697     06:24:31	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:14.697     06:24:31	-- scripts/common.sh@354 -- # echo 1
00:14:14.697    06:24:31	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:14.697     06:24:31	-- scripts/common.sh@365 -- # decimal 2
00:14:14.697     06:24:31	-- scripts/common.sh@352 -- # local d=2
00:14:14.697     06:24:31	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:14.697     06:24:31	-- scripts/common.sh@354 -- # echo 2
00:14:14.697    06:24:31	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:14.697    06:24:31	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:14.697    06:24:31	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:14.697    06:24:31	-- scripts/common.sh@367 -- # return 0
00:14:14.697    06:24:31	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:14.697    06:24:31	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:14.697  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:14.697  		--rc genhtml_branch_coverage=1
00:14:14.697  		--rc genhtml_function_coverage=1
00:14:14.697  		--rc genhtml_legend=1
00:14:14.697  		--rc geninfo_all_blocks=1
00:14:14.697  		--rc geninfo_unexecuted_blocks=1
00:14:14.697  		
00:14:14.697  		'
00:14:14.697    06:24:31	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:14.697  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:14.697  		--rc genhtml_branch_coverage=1
00:14:14.697  		--rc genhtml_function_coverage=1
00:14:14.697  		--rc genhtml_legend=1
00:14:14.697  		--rc geninfo_all_blocks=1
00:14:14.697  		--rc geninfo_unexecuted_blocks=1
00:14:14.697  		
00:14:14.697  		'
00:14:14.697    06:24:31	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:14.697  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:14.697  		--rc genhtml_branch_coverage=1
00:14:14.697  		--rc genhtml_function_coverage=1
00:14:14.697  		--rc genhtml_legend=1
00:14:14.697  		--rc geninfo_all_blocks=1
00:14:14.697  		--rc geninfo_unexecuted_blocks=1
00:14:14.697  		
00:14:14.697  		'
00:14:14.697    06:24:31	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:14.697  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:14.697  		--rc genhtml_branch_coverage=1
00:14:14.697  		--rc genhtml_function_coverage=1
00:14:14.697  		--rc genhtml_legend=1
00:14:14.697  		--rc geninfo_all_blocks=1
00:14:14.697  		--rc geninfo_unexecuted_blocks=1
00:14:14.697  		
00:14:14.697  		'
00:14:14.697   06:24:31	-- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:14:14.697     06:24:31	-- nvmf/common.sh@7 -- # uname -s
00:14:14.697    06:24:31	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:14.697    06:24:31	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:14.697    06:24:31	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:14.697    06:24:31	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:14.697    06:24:31	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:14.697    06:24:31	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:14.697    06:24:31	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:14.697    06:24:31	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:14.697    06:24:31	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:14.697     06:24:31	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:14.697    06:24:31	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:14:14.697    06:24:31	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:14:14.697    06:24:31	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:14.697    06:24:31	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:14.697    06:24:31	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:14:14.697    06:24:31	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:14.697     06:24:31	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:14.697     06:24:31	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:14.697     06:24:31	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:14.697      06:24:31	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:14.697      06:24:31	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:14.697      06:24:31	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:14.697      06:24:31	-- paths/export.sh@5 -- # export PATH
00:14:14.698      06:24:31	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:14.698    06:24:31	-- nvmf/common.sh@46 -- # : 0
00:14:14.698    06:24:31	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:14:14.698    06:24:31	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:14:14.698    06:24:31	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:14:14.698    06:24:31	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:14.698    06:24:31	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:14.698    06:24:31	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:14:14.698    06:24:31	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:14:14.698    06:24:31	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:14:14.698   06:24:31	-- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:14:14.698   06:24:31	-- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:14:14.698   06:24:31	-- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:14:14.698   06:24:31	-- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:14:14.698   06:24:31	-- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:14.698   06:24:31	-- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:14:14.698   06:24:31	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:14:14.698   06:24:31	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:14.698   06:24:31	-- nvmf/common.sh@436 -- # prepare_net_devs
00:14:14.698   06:24:31	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:14:14.698   06:24:31	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:14:14.698   06:24:31	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:14.698   06:24:31	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:14.698    06:24:31	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:14.698   06:24:31	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:14:14.698   06:24:31	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:14:14.698   06:24:31	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:14:14.698   06:24:31	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:14:14.698   06:24:31	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:14:14.698   06:24:31	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:14:14.698   06:24:31	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:14.698   06:24:31	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:14.698   06:24:31	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:14:14.698   06:24:31	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:14:14.698   06:24:31	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:14:14.698   06:24:31	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:14:14.698   06:24:31	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:14:14.698   06:24:31	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:14.698   06:24:31	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:14:14.698   06:24:31	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:14:14.698   06:24:31	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:14:14.698   06:24:31	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:14:14.698   06:24:31	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:14:14.698   06:24:31	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:14:14.698  Cannot find device "nvmf_tgt_br"
00:14:14.698   06:24:31	-- nvmf/common.sh@154 -- # true
00:14:14.698   06:24:31	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:14:14.698  Cannot find device "nvmf_tgt_br2"
00:14:14.698   06:24:31	-- nvmf/common.sh@155 -- # true
00:14:14.698   06:24:31	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:14:14.698   06:24:31	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:14:14.698  Cannot find device "nvmf_tgt_br"
00:14:14.698   06:24:31	-- nvmf/common.sh@157 -- # true
00:14:14.698   06:24:31	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:14:14.698  Cannot find device "nvmf_tgt_br2"
00:14:14.698   06:24:31	-- nvmf/common.sh@158 -- # true
00:14:14.698   06:24:31	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:14:14.698   06:24:31	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:14:14.698   06:24:31	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:14:14.698  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:14:14.698   06:24:31	-- nvmf/common.sh@161 -- # true
00:14:14.698   06:24:31	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:14:14.698  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:14:14.698   06:24:31	-- nvmf/common.sh@162 -- # true
00:14:14.698   06:24:31	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:14:14.698   06:24:31	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:14:14.698   06:24:31	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:14:14.698   06:24:31	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:14:14.698   06:24:31	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:14:14.956   06:24:31	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:14:14.956   06:24:31	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:14:14.956   06:24:31	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:14:14.956   06:24:31	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:14:14.956   06:24:31	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:14:14.956   06:24:31	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:14:14.956   06:24:31	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:14:14.956   06:24:31	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:14:14.956   06:24:31	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:14:14.956   06:24:31	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:14:14.956   06:24:31	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:14:14.956   06:24:31	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:14:14.956   06:24:31	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:14:14.956   06:24:31	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:14:14.956   06:24:31	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:14:14.957   06:24:31	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:14:14.957   06:24:31	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:14:14.957   06:24:31	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:14:14.957   06:24:31	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:14:14.957  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:14.957  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms
00:14:14.957  
00:14:14.957  --- 10.0.0.2 ping statistics ---
00:14:14.957  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:14.957  rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms
00:14:14.957   06:24:31	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:14:14.957  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:14:14.957  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms
00:14:14.957  
00:14:14.957  --- 10.0.0.3 ping statistics ---
00:14:14.957  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:14.957  rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:14:14.957   06:24:31	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:14:14.957  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:14.957  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:14:14.957  
00:14:14.957  --- 10.0.0.1 ping statistics ---
00:14:14.957  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:14.957  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:14:14.957   06:24:31	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:14.957   06:24:31	-- nvmf/common.sh@421 -- # return 0
00:14:14.957   06:24:31	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:14:14.957   06:24:31	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:14.957   06:24:31	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:14:14.957   06:24:31	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:14:14.957   06:24:31	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:14.957   06:24:31	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:14:14.957   06:24:31	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
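The nvmf_veth_init trace above builds the virtual test network for NVMe/TCP: veth pairs for initiator and target, the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side peers, and an iptables rule admitting traffic on port 4420. A minimal standalone sketch of the same topology, using the interface names and addresses from the trace (simplified to a single target interface; this is not the actual common.sh implementation):

    #!/usr/bin/env bash
    # Rough sketch of the topology set up by nvmf_veth_init above.
    # Run as root on a disposable host; names/addresses mirror the log.
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if end carries traffic, the *_br end joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # 10.0.0.1 = initiator (root namespace), 10.0.0.2 = target (test namespace).
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so the two namespaces can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Admit NVMe/TCP (port 4420) and allow forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2   # reachability check, as in the trace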
00:14:14.957   06:24:31	-- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:14:14.957   06:24:31	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:14:14.957   06:24:31	-- common/autotest_common.sh@722 -- # xtrace_disable
00:14:14.957   06:24:31	-- common/autotest_common.sh@10 -- # set +x
00:14:14.957  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:14.957   06:24:31	-- nvmf/common.sh@469 -- # nvmfpid=72355
00:14:14.957   06:24:31	-- nvmf/common.sh@470 -- # waitforlisten 72355
00:14:14.957   06:24:31	-- common/autotest_common.sh@829 -- # '[' -z 72355 ']'
00:14:14.957   06:24:31	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:14:14.957   06:24:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:14.957   06:24:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:14.957   06:24:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:14.957   06:24:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:14.957   06:24:31	-- common/autotest_common.sh@10 -- # set +x
00:14:14.957  [2024-12-16 06:24:31.885168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:14.957  [2024-12-16 06:24:31.886054] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:15.215  [2024-12-16 06:24:32.027848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:15.215  [2024-12-16 06:24:32.142087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:15.215  [2024-12-16 06:24:32.142557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:15.215  [2024-12-16 06:24:32.142622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:15.215  [2024-12-16 06:24:32.142805] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:15.215  [2024-12-16 06:24:32.143030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:15.215  [2024-12-16 06:24:32.143157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:14:15.215  [2024-12-16 06:24:32.143162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:16.148   06:24:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:16.148   06:24:32	-- common/autotest_common.sh@862 -- # return 0
00:14:16.148   06:24:32	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:14:16.148   06:24:32	-- common/autotest_common.sh@728 -- # xtrace_disable
00:14:16.148   06:24:32	-- common/autotest_common.sh@10 -- # set +x
00:14:16.148   06:24:32	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:16.148   06:24:32	-- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:14:16.406  [2024-12-16 06:24:33.202693] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:16.406    06:24:33	-- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:16.664   06:24:33	-- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:14:16.664    06:24:33	-- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:16.922   06:24:33	-- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:14:16.922   06:24:33	-- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:14:17.180    06:24:34	-- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:14:17.438   06:24:34	-- target/nvmf_lvol.sh@29 -- # lvs=f64f7dfe-cec4-4c46-8b44-b22a3a10cc8c
00:14:17.438    06:24:34	-- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f64f7dfe-cec4-4c46-8b44-b22a3a10cc8c lvol 20
00:14:17.696   06:24:34	-- target/nvmf_lvol.sh@32 -- # lvol=393bc028-3bdb-4231-9cd5-dbd19af015c5
00:14:17.696   06:24:34	-- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:14:17.954   06:24:34	-- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 393bc028-3bdb-4231-9cd5-dbd19af015c5
00:14:18.214   06:24:35	-- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:14:18.473  [2024-12-16 06:24:35.358741] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:18.473   06:24:35	-- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
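At this point nvmf_lvol.sh has exported a logical volume over NVMe/TCP. A condensed sketch of the rpc.py sequence visible in the trace (the UUIDs are returned by the RPCs at runtime; the shell variables here are illustrative, not part of the original script):

    #!/usr/bin/env bash
    # Illustrative condensation of the nvmf_lvol.sh RPC calls traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192

    # Two 64 MiB malloc bdevs striped into RAID0, with an lvstore on top.
    $rpc bdev_malloc_create 64 512                     # -> Malloc0
    $rpc bdev_malloc_create 64 512                     # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB volume

    # Expose the lvol through an NVMe-oF subsystem listening on 10.0.0.2:4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420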
00:14:18.730   06:24:35	-- target/nvmf_lvol.sh@42 -- # perf_pid=72508
00:14:18.730   06:24:35	-- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:14:18.730   06:24:35	-- target/nvmf_lvol.sh@44 -- # sleep 1
00:14:19.664    06:24:36	-- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 393bc028-3bdb-4231-9cd5-dbd19af015c5 MY_SNAPSHOT
00:14:20.230   06:24:36	-- target/nvmf_lvol.sh@47 -- # snapshot=6419d79a-1581-479c-add1-d6d95ed4ba10
00:14:20.230   06:24:36	-- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 393bc028-3bdb-4231-9cd5-dbd19af015c5 30
00:14:20.488    06:24:37	-- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 6419d79a-1581-479c-add1-d6d95ed4ba10 MY_CLONE
00:14:20.746   06:24:37	-- target/nvmf_lvol.sh@49 -- # clone=590e4709-a529-4aeb-95e6-53b8b5565370
00:14:20.746   06:24:37	-- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 590e4709-a529-4aeb-95e6-53b8b5565370
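While spdk_nvme_perf (pid 72508) writes to the namespace, the script exercises lvol features against the live volume: take a snapshot, grow the origin, clone the snapshot, then inflate the clone. A sketch of that sequence with the same rpc.py verbs as the trace (UUIDs are the ones reported above):

    #!/usr/bin/env bash
    # Sketch of the lvol operations performed under I/O load above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvol=393bc028-3bdb-4231-9cd5-dbd19af015c5             # from bdev_lvol_create

    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot
    $rpc bdev_lvol_resize "$lvol" 30                      # grow origin 20 MiB -> 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # copy clusters, detach from snapshot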
00:14:21.313   06:24:38	-- target/nvmf_lvol.sh@53 -- # wait 72508
00:14:29.424  Initializing NVMe Controllers
00:14:29.424  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:14:29.424  Controller IO queue size 128, less than required.
00:14:29.424  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:29.424  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:14:29.424  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:14:29.424  Initialization complete. Launching workers.
00:14:29.424  ========================================================
00:14:29.424                                                                                                               Latency(us)
00:14:29.424  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:29.424  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:    9623.20      37.59   13306.26     537.00   91701.21
00:14:29.424  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:    9763.30      38.14   13121.49    3350.50   69619.78
00:14:29.424  ========================================================
00:14:29.424  Total                                                                    :   19386.49      75.73   13213.21     537.00   91701.21
00:14:29.424  
00:14:29.424   06:24:45	-- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:14:29.424   06:24:46	-- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 393bc028-3bdb-4231-9cd5-dbd19af015c5
00:14:29.682   06:24:46	-- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f64f7dfe-cec4-4c46-8b44-b22a3a10cc8c
00:14:29.941   06:24:46	-- target/nvmf_lvol.sh@60 -- # rm -f
00:14:29.941   06:24:46	-- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:14:29.941   06:24:46	-- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:14:29.941   06:24:46	-- nvmf/common.sh@476 -- # nvmfcleanup
00:14:29.941   06:24:46	-- nvmf/common.sh@116 -- # sync
00:14:29.941   06:24:46	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:29.941   06:24:46	-- nvmf/common.sh@119 -- # set +e
00:14:29.941   06:24:46	-- nvmf/common.sh@120 -- # for i in {1..20}
00:14:29.941   06:24:46	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:29.941  rmmod nvme_tcp
00:14:29.941  rmmod nvme_fabrics
00:14:29.941  rmmod nvme_keyring
00:14:29.941   06:24:46	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:29.941   06:24:46	-- nvmf/common.sh@123 -- # set -e
00:14:29.941   06:24:46	-- nvmf/common.sh@124 -- # return 0
00:14:29.941   06:24:46	-- nvmf/common.sh@477 -- # '[' -n 72355 ']'
00:14:29.941   06:24:46	-- nvmf/common.sh@478 -- # killprocess 72355
00:14:29.941   06:24:46	-- common/autotest_common.sh@936 -- # '[' -z 72355 ']'
00:14:29.941   06:24:46	-- common/autotest_common.sh@940 -- # kill -0 72355
00:14:29.941    06:24:46	-- common/autotest_common.sh@941 -- # uname
00:14:29.941   06:24:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:29.941    06:24:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72355
00:14:29.941  killing process with pid 72355
00:14:29.941   06:24:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:29.941   06:24:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:29.941   06:24:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 72355'
00:14:29.941   06:24:46	-- common/autotest_common.sh@955 -- # kill 72355
00:14:29.941   06:24:46	-- common/autotest_common.sh@960 -- # wait 72355
00:14:30.199   06:24:47	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:14:30.199   06:24:47	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:14:30.199   06:24:47	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:14:30.199   06:24:47	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:30.199   06:24:47	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:14:30.199   06:24:47	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:30.199   06:24:47	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:30.199    06:24:47	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:30.199   06:24:47	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:14:30.199  ************************************
00:14:30.199  END TEST nvmf_lvol
00:14:30.199  ************************************
00:14:30.199  
00:14:30.199  real	0m15.815s
00:14:30.199  user	1m5.756s
00:14:30.199  sys	0m3.850s
00:14:30.199   06:24:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:30.199   06:24:47	-- common/autotest_common.sh@10 -- # set +x
00:14:30.199   06:24:47	-- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:14:30.199   06:24:47	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:30.199   06:24:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:30.199   06:24:47	-- common/autotest_common.sh@10 -- # set +x
00:14:30.199  ************************************
00:14:30.199  START TEST nvmf_lvs_grow
00:14:30.199  ************************************
00:14:30.199   06:24:47	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:14:30.459  * Looking for test storage...
00:14:30.459  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:14:30.459    06:24:47	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:30.459     06:24:47	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:30.459     06:24:47	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:30.459    06:24:47	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:30.459    06:24:47	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:30.459    06:24:47	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:30.459    06:24:47	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:30.459    06:24:47	-- scripts/common.sh@335 -- # IFS=.-:
00:14:30.459    06:24:47	-- scripts/common.sh@335 -- # read -ra ver1
00:14:30.459    06:24:47	-- scripts/common.sh@336 -- # IFS=.-:
00:14:30.459    06:24:47	-- scripts/common.sh@336 -- # read -ra ver2
00:14:30.459    06:24:47	-- scripts/common.sh@337 -- # local 'op=<'
00:14:30.459    06:24:47	-- scripts/common.sh@339 -- # ver1_l=2
00:14:30.459    06:24:47	-- scripts/common.sh@340 -- # ver2_l=1
00:14:30.459    06:24:47	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:30.459    06:24:47	-- scripts/common.sh@343 -- # case "$op" in
00:14:30.459    06:24:47	-- scripts/common.sh@344 -- # : 1
00:14:30.459    06:24:47	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:30.459    06:24:47	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:30.459     06:24:47	-- scripts/common.sh@364 -- # decimal 1
00:14:30.459     06:24:47	-- scripts/common.sh@352 -- # local d=1
00:14:30.459     06:24:47	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.459     06:24:47	-- scripts/common.sh@354 -- # echo 1
00:14:30.459    06:24:47	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:30.459     06:24:47	-- scripts/common.sh@365 -- # decimal 2
00:14:30.459     06:24:47	-- scripts/common.sh@352 -- # local d=2
00:14:30.459     06:24:47	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:30.459     06:24:47	-- scripts/common.sh@354 -- # echo 2
00:14:30.459    06:24:47	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:30.459    06:24:47	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:30.459    06:24:47	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:30.459    06:24:47	-- scripts/common.sh@367 -- # return 0
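The block above is the xtrace of scripts/common.sh deciding whether the installed lcov (1.15, read from lcov --version) is older than 2 before enabling the branch/function coverage flags: both versions are split into arrays (ver1, ver2) and compared component by component, returning 0 as soon as a ver1 component is smaller. An approximate standalone rendering of that comparison (not the actual common.sh source, just the behaviour the trace shows):

    #!/usr/bin/env bash
    # Approximate restatement of the lt/cmp_versions logic traced above.
    version_lt() {    # usage: version_lt 1.15 2  -> returns 0 if $1 < $2
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local v a b
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0}; b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1    # equal
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi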
00:14:30.459    06:24:47	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:30.459    06:24:47	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:30.459  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:30.459  		--rc genhtml_branch_coverage=1
00:14:30.459  		--rc genhtml_function_coverage=1
00:14:30.459  		--rc genhtml_legend=1
00:14:30.459  		--rc geninfo_all_blocks=1
00:14:30.459  		--rc geninfo_unexecuted_blocks=1
00:14:30.459  		
00:14:30.459  		'
00:14:30.459    06:24:47	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:30.459  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:30.459  		--rc genhtml_branch_coverage=1
00:14:30.459  		--rc genhtml_function_coverage=1
00:14:30.459  		--rc genhtml_legend=1
00:14:30.459  		--rc geninfo_all_blocks=1
00:14:30.459  		--rc geninfo_unexecuted_blocks=1
00:14:30.459  		
00:14:30.459  		'
00:14:30.459    06:24:47	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:30.459  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:30.459  		--rc genhtml_branch_coverage=1
00:14:30.459  		--rc genhtml_function_coverage=1
00:14:30.459  		--rc genhtml_legend=1
00:14:30.459  		--rc geninfo_all_blocks=1
00:14:30.459  		--rc geninfo_unexecuted_blocks=1
00:14:30.459  		
00:14:30.459  		'
00:14:30.459    06:24:47	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:30.459  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:30.459  		--rc genhtml_branch_coverage=1
00:14:30.459  		--rc genhtml_function_coverage=1
00:14:30.459  		--rc genhtml_legend=1
00:14:30.459  		--rc geninfo_all_blocks=1
00:14:30.459  		--rc geninfo_unexecuted_blocks=1
00:14:30.459  		
00:14:30.459  		'
00:14:30.459   06:24:47	-- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:14:30.459     06:24:47	-- nvmf/common.sh@7 -- # uname -s
00:14:30.459    06:24:47	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:30.459    06:24:47	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:30.459    06:24:47	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:30.459    06:24:47	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:30.459    06:24:47	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:30.459    06:24:47	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:30.459    06:24:47	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:30.459    06:24:47	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:30.459    06:24:47	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:30.459     06:24:47	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:30.459    06:24:47	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:14:30.459    06:24:47	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:14:30.459    06:24:47	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:30.459    06:24:47	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:30.459    06:24:47	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:14:30.459    06:24:47	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:30.459     06:24:47	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:30.459     06:24:47	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:30.459     06:24:47	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:30.459      06:24:47	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:30.459      06:24:47	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:30.459      06:24:47	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:30.459      06:24:47	-- paths/export.sh@5 -- # export PATH
00:14:30.459      06:24:47	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:30.459    06:24:47	-- nvmf/common.sh@46 -- # : 0
00:14:30.459    06:24:47	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:14:30.459    06:24:47	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:14:30.459    06:24:47	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:14:30.459    06:24:47	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:30.459    06:24:47	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:30.459    06:24:47	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:14:30.459    06:24:47	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:14:30.459    06:24:47	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:14:30.459   06:24:47	-- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:30.459   06:24:47	-- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:14:30.459   06:24:47	-- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit
00:14:30.460   06:24:47	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:14:30.460   06:24:47	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:30.460   06:24:47	-- nvmf/common.sh@436 -- # prepare_net_devs
00:14:30.460   06:24:47	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:14:30.460   06:24:47	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:14:30.460   06:24:47	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:30.460   06:24:47	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:30.460    06:24:47	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:30.460   06:24:47	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:14:30.460   06:24:47	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:14:30.460   06:24:47	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:14:30.460   06:24:47	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:14:30.460   06:24:47	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:14:30.460   06:24:47	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:14:30.460   06:24:47	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:30.460   06:24:47	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:30.460   06:24:47	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:14:30.460   06:24:47	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:14:30.460   06:24:47	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:14:30.460   06:24:47	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:14:30.460   06:24:47	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:14:30.460   06:24:47	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:30.460   06:24:47	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:14:30.460   06:24:47	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:14:30.460   06:24:47	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:14:30.460   06:24:47	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:14:30.460   06:24:47	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:14:30.460   06:24:47	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:14:30.460  Cannot find device "nvmf_tgt_br"
00:14:30.460   06:24:47	-- nvmf/common.sh@154 -- # true
00:14:30.460   06:24:47	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:14:30.460  Cannot find device "nvmf_tgt_br2"
00:14:30.460   06:24:47	-- nvmf/common.sh@155 -- # true
00:14:30.460   06:24:47	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:14:30.460   06:24:47	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:14:30.460  Cannot find device "nvmf_tgt_br"
00:14:30.460   06:24:47	-- nvmf/common.sh@157 -- # true
00:14:30.460   06:24:47	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:14:30.460  Cannot find device "nvmf_tgt_br2"
00:14:30.460   06:24:47	-- nvmf/common.sh@158 -- # true
00:14:30.460   06:24:47	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:14:30.718   06:24:47	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:14:30.718   06:24:47	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:14:30.718  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:14:30.718   06:24:47	-- nvmf/common.sh@161 -- # true
00:14:30.718   06:24:47	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:14:30.718  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:14:30.718   06:24:47	-- nvmf/common.sh@162 -- # true
00:14:30.718   06:24:47	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:14:30.718   06:24:47	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:14:30.718   06:24:47	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:14:30.718   06:24:47	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:14:30.718   06:24:47	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:14:30.718   06:24:47	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:14:30.718   06:24:47	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:14:30.718   06:24:47	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:14:30.718   06:24:47	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:14:30.718   06:24:47	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:14:30.718   06:24:47	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:14:30.718   06:24:47	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:14:30.718   06:24:47	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:14:30.718   06:24:47	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:14:30.718   06:24:47	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:14:30.718   06:24:47	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:14:30.718   06:24:47	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:14:30.718   06:24:47	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:14:30.718   06:24:47	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:14:30.718   06:24:47	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:14:30.718   06:24:47	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:14:30.718   06:24:47	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:14:30.718   06:24:47	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:14:30.718   06:24:47	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:14:30.718  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:30.718  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms
00:14:30.718  
00:14:30.718  --- 10.0.0.2 ping statistics ---
00:14:30.718  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.718  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:14:30.718   06:24:47	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:14:30.718  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:14:30.718  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms
00:14:30.718  
00:14:30.718  --- 10.0.0.3 ping statistics ---
00:14:30.718  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.718  rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:14:30.718   06:24:47	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:14:30.718  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:30.718  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:14:30.718  
00:14:30.718  --- 10.0.0.1 ping statistics ---
00:14:30.718  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.718  rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:14:30.718   06:24:47	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:30.718   06:24:47	-- nvmf/common.sh@421 -- # return 0
00:14:30.718   06:24:47	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:14:30.718   06:24:47	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:30.718   06:24:47	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:14:30.718   06:24:47	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:14:30.718   06:24:47	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:30.719   06:24:47	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:14:30.719   06:24:47	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:14:30.977   06:24:47	-- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1
00:14:30.977   06:24:47	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:14:30.977   06:24:47	-- common/autotest_common.sh@722 -- # xtrace_disable
00:14:30.977   06:24:47	-- common/autotest_common.sh@10 -- # set +x
00:14:30.977  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:30.977   06:24:47	-- nvmf/common.sh@469 -- # nvmfpid=72879
00:14:30.977   06:24:47	-- nvmf/common.sh@470 -- # waitforlisten 72879
00:14:30.977   06:24:47	-- common/autotest_common.sh@829 -- # '[' -z 72879 ']'
00:14:30.977   06:24:47	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:14:30.977   06:24:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:30.977   06:24:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:30.977   06:24:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:30.977   06:24:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:30.977   06:24:47	-- common/autotest_common.sh@10 -- # set +x
00:14:30.977  [2024-12-16 06:24:47.773309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:30.977  [2024-12-16 06:24:47.773398] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:30.977  [2024-12-16 06:24:47.911424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:31.236  [2024-12-16 06:24:48.014555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:31.236  [2024-12-16 06:24:48.014974] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:31.236  [2024-12-16 06:24:48.015001] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:31.236  [2024-12-16 06:24:48.015016] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:31.236  [2024-12-16 06:24:48.015055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:31.804   06:24:48	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:31.804   06:24:48	-- common/autotest_common.sh@862 -- # return 0
00:14:31.804   06:24:48	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:14:31.804   06:24:48	-- common/autotest_common.sh@728 -- # xtrace_disable
00:14:31.804   06:24:48	-- common/autotest_common.sh@10 -- # set +x
00:14:32.063   06:24:48	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:32.063   06:24:48	-- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:14:32.321  [2024-12-16 06:24:49.095239] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow
00:14:32.321   06:24:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:32.321   06:24:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:32.321   06:24:49	-- common/autotest_common.sh@10 -- # set +x
00:14:32.321  ************************************
00:14:32.321  START TEST lvs_grow_clean
00:14:32.321  ************************************
00:14:32.321   06:24:49	-- common/autotest_common.sh@1114 -- # lvs_grow
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:14:32.321   06:24:49	-- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:14:32.321    06:24:49	-- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:14:32.580   06:24:49	-- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:14:32.580    06:24:49	-- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:14:32.839   06:24:49	-- target/nvmf_lvs_grow.sh@28 -- # lvs=cfa60f47-133d-4746-9606-a2071cb25c56
00:14:32.839    06:24:49	-- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:32.839    06:24:49	-- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:14:33.097   06:24:49	-- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:14:33.097   06:24:49	-- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
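The lvs_grow_clean setup above backs an lvstore with a 200 MiB file exposed as a 4096-byte-block AIO bdev and uses 4 MiB (4194304-byte) clusters: 200 MiB / 4 MiB = 50 clusters, of which 49 are reported as data clusters (the remaining cluster presumably holds lvstore metadata), matching the data_clusters == 49 assertion. Sketch of the same setup and check, with the paths and RPC verbs from the trace:

    #!/usr/bin/env bash
    # Sketch of the aio_bdev/lvstore setup verified above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # 200 MiB / 4 MiB = 50 clusters; 49 of them are available for data.
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49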
00:14:33.097    06:24:49	-- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cfa60f47-133d-4746-9606-a2071cb25c56 lvol 150
00:14:33.356   06:24:50	-- target/nvmf_lvs_grow.sh@33 -- # lvol=4af2e175-3929-4311-8e26-fd0eec2443aa
00:14:33.356   06:24:50	-- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:14:33.356   06:24:50	-- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:14:33.614  [2024-12-16 06:24:50.448439] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:14:33.614  [2024-12-16 06:24:50.448541] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:14:33.614  true
00:14:33.614    06:24:50	-- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:33.614    06:24:50	-- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:14:33.873   06:24:50	-- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:14:33.873   06:24:50	-- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:14:34.131   06:24:50	-- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4af2e175-3929-4311-8e26-fd0eec2443aa
00:14:34.390   06:24:51	-- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:14:34.390  [2024-12-16 06:24:51.350123] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:34.649   06:24:51	-- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:34.649   06:24:51	-- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:14:34.649   06:24:51	-- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73041
00:14:34.649   06:24:51	-- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:34.649   06:24:51	-- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73041 /var/tmp/bdevperf.sock
00:14:34.649   06:24:51	-- common/autotest_common.sh@829 -- # '[' -z 73041 ']'
00:14:34.649   06:24:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:34.649   06:24:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:34.649  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:34.649   06:24:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:34.649   06:24:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:34.649   06:24:51	-- common/autotest_common.sh@10 -- # set +x
00:14:34.649  [2024-12-16 06:24:51.618621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:34.649  [2024-12-16 06:24:51.618711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73041 ]
00:14:34.908  [2024-12-16 06:24:51.751989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:34.908  [2024-12-16 06:24:51.858724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:35.844   06:24:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:35.844   06:24:52	-- common/autotest_common.sh@862 -- # return 0
00:14:35.844   06:24:52	-- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:14:36.103  Nvme0n1
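The bdevperf flow traced above: the app is started idle (-z) on its own RPC socket, the test waits for that socket, attaches the exported subsystem as local bdev Nvme0 (creating Nvme0n1), and later drives the configured workload through the bdevperf.py helper. A condensed sketch using the binaries, socket, and flags from the trace; the readiness probe below is an assumption standing in for the test's waitforlisten helper:

    #!/usr/bin/env bash
    # Sketch of the bdevperf start/attach/run flow shown above.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    $spdk/build/examples/bdevperf -r "$sock" -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!

    # Crude readiness probe (assumption; the test uses waitforlisten instead).
    until $spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    # Attach the NVMe-oF target as bdev Nvme0 (namespace 1 becomes Nvme0n1).
    $spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the 10 s randwrite run, then shut the idle app down.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
    kill "$bdevperf_pid"; wait "$bdevperf_pid" 2>/dev/null || true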
00:14:36.103   06:24:52	-- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:14:36.361  [
00:14:36.361    {
00:14:36.361      "aliases": [
00:14:36.361        "4af2e175-3929-4311-8e26-fd0eec2443aa"
00:14:36.361      ],
00:14:36.361      "assigned_rate_limits": {
00:14:36.361        "r_mbytes_per_sec": 0,
00:14:36.361        "rw_ios_per_sec": 0,
00:14:36.361        "rw_mbytes_per_sec": 0,
00:14:36.361        "w_mbytes_per_sec": 0
00:14:36.361      },
00:14:36.361      "block_size": 4096,
00:14:36.361      "claimed": false,
00:14:36.361      "driver_specific": {
00:14:36.361        "mp_policy": "active_passive",
00:14:36.361        "nvme": [
00:14:36.361          {
00:14:36.361            "ctrlr_data": {
00:14:36.361              "ana_reporting": false,
00:14:36.361              "cntlid": 1,
00:14:36.361              "firmware_revision": "24.01.1",
00:14:36.361              "model_number": "SPDK bdev Controller",
00:14:36.361              "multi_ctrlr": true,
00:14:36.361              "oacs": {
00:14:36.361                "firmware": 0,
00:14:36.361                "format": 0,
00:14:36.361                "ns_manage": 0,
00:14:36.361                "security": 0
00:14:36.361              },
00:14:36.361              "serial_number": "SPDK0",
00:14:36.361              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:36.361              "vendor_id": "0x8086"
00:14:36.361            },
00:14:36.361            "ns_data": {
00:14:36.361              "can_share": true,
00:14:36.361              "id": 1
00:14:36.361            },
00:14:36.361            "trid": {
00:14:36.361              "adrfam": "IPv4",
00:14:36.361              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:36.361              "traddr": "10.0.0.2",
00:14:36.361              "trsvcid": "4420",
00:14:36.361              "trtype": "TCP"
00:14:36.361            },
00:14:36.361            "vs": {
00:14:36.361              "nvme_version": "1.3"
00:14:36.361            }
00:14:36.361          }
00:14:36.361        ]
00:14:36.361      },
00:14:36.361      "name": "Nvme0n1",
00:14:36.362      "num_blocks": 38912,
00:14:36.362      "product_name": "NVMe disk",
00:14:36.362      "supported_io_types": {
00:14:36.362        "abort": true,
00:14:36.362        "compare": true,
00:14:36.362        "compare_and_write": true,
00:14:36.362        "flush": true,
00:14:36.362        "nvme_admin": true,
00:14:36.362        "nvme_io": true,
00:14:36.362        "read": true,
00:14:36.362        "reset": true,
00:14:36.362        "unmap": true,
00:14:36.362        "write": true,
00:14:36.362        "write_zeroes": true
00:14:36.362      },
00:14:36.362      "uuid": "4af2e175-3929-4311-8e26-fd0eec2443aa",
00:14:36.362      "zoned": false
00:14:36.362    }
00:14:36.362  ]
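The JSON above is the bdev_get_bdevs view of the newly attached Nvme0n1: 38912 blocks of 4096 bytes, i.e. 152 MiB, which matches the 150 MiB lvol rounded up to whole 4 MiB clusters (38 x 4 MiB), with a TCP trid pointing back at the subsystem. Individual fields can be pulled out the same way the test extracts lvstore fields, for example (illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 | jq -r '.[0].num_blocks'                           # 38912
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 | jq -r '.[0].driver_specific.nvme[0].trid.traddr'  # 10.0.0.2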
00:14:36.362   06:24:53	-- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:14:36.362   06:24:53	-- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73088
00:14:36.362   06:24:53	-- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:14:36.362  Running I/O for 10 seconds...
00:14:37.298                                                                                                  Latency(us)
00:14:37.298  
[2024-12-16T06:24:54.274Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:37.298  
[2024-12-16T06:24:54.274Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:37.298  	 Nvme0n1             :       1.00    7408.00      28.94       0.00     0.00       0.00       0.00       0.00
00:14:37.298  
[2024-12-16T06:24:54.274Z]  ===================================================================================================================
00:14:37.298  
[2024-12-16T06:24:54.274Z]  Total                       :               7408.00      28.94       0.00     0.00       0.00       0.00       0.00
00:14:37.298  
00:14:38.235   06:24:55	-- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:38.235  
[2024-12-16T06:24:55.211Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:38.235  	 Nvme0n1             :       2.00    7301.00      28.52       0.00     0.00       0.00       0.00       0.00
00:14:38.235  
[2024-12-16T06:24:55.211Z]  ===================================================================================================================
00:14:38.235  
[2024-12-16T06:24:55.211Z]  Total                       :               7301.00      28.52       0.00     0.00       0.00       0.00       0.00
00:14:38.235  
00:14:38.495  true
00:14:38.495    06:24:55	-- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:38.495    06:24:55	-- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:14:39.062   06:24:55	-- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:14:39.062   06:24:55	-- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
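The grow path verified above: the backing file is truncated from 200 MiB to 400 MiB, bdev_aio_rescan picks up the new size (51200 -> 102400 blocks of 4096 bytes), and bdev_lvol_grow_lvstore extends the store; 400 MiB / 4 MiB = 100 clusters, minus the metadata cluster, gives the 99 total_data_clusters asserted. Sketch of that sequence with the UUID and paths from the trace:

    #!/usr/bin/env bash
    # Sketch of the lvstore grow step checked above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs=cfa60f47-133d-4746-9606-a2071cb25c56       # from bdev_lvol_create_lvstore

    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"

    # 400 MiB / 4 MiB = 100 clusters; 99 remain after metadata.
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99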
00:14:39.062   06:24:55	-- target/nvmf_lvs_grow.sh@65 -- # wait 73088
00:14:39.321  
[2024-12-16T06:24:56.297Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:39.321  	 Nvme0n1             :       3.00    7220.00      28.20       0.00     0.00       0.00       0.00       0.00
00:14:39.321  
[2024-12-16T06:24:56.297Z]  ===================================================================================================================
00:14:39.321  
[2024-12-16T06:24:56.297Z]  Total                       :               7220.00      28.20       0.00     0.00       0.00       0.00       0.00
00:14:39.321  
00:14:40.257  
[2024-12-16T06:24:57.233Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:40.257  	 Nvme0n1             :       4.00    7271.25      28.40       0.00     0.00       0.00       0.00       0.00
00:14:40.257  
[2024-12-16T06:24:57.233Z]  ===================================================================================================================
00:14:40.257  
[2024-12-16T06:24:57.233Z]  Total                       :               7271.25      28.40       0.00     0.00       0.00       0.00       0.00
00:14:40.257  
00:14:41.634  
[2024-12-16T06:24:58.610Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:41.634  	 Nvme0n1             :       5.00    7289.00      28.47       0.00     0.00       0.00       0.00       0.00
00:14:41.634  
[2024-12-16T06:24:58.610Z]  ===================================================================================================================
00:14:41.634  
[2024-12-16T06:24:58.610Z]  Total                       :               7289.00      28.47       0.00     0.00       0.00       0.00       0.00
00:14:41.634  
00:14:42.271  
[2024-12-16T06:24:59.247Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:42.271  	 Nvme0n1             :       6.00    7308.33      28.55       0.00     0.00       0.00       0.00       0.00
00:14:42.271  
[2024-12-16T06:24:59.247Z]  ===================================================================================================================
00:14:42.271  
[2024-12-16T06:24:59.247Z]  Total                       :               7308.33      28.55       0.00     0.00       0.00       0.00       0.00
00:14:42.271  
00:14:43.647  
[2024-12-16T06:25:00.623Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:43.647  	 Nvme0n1             :       7.00    7296.14      28.50       0.00     0.00       0.00       0.00       0.00
00:14:43.647  
[2024-12-16T06:25:00.623Z]  ===================================================================================================================
00:14:43.647  
[2024-12-16T06:25:00.623Z]  Total                       :               7296.14      28.50       0.00     0.00       0.00       0.00       0.00
00:14:43.647  
00:14:44.582  
[2024-12-16T06:25:01.558Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:44.582  	 Nvme0n1             :       8.00    7288.50      28.47       0.00     0.00       0.00       0.00       0.00
00:14:44.582  
[2024-12-16T06:25:01.558Z]  ===================================================================================================================
00:14:44.582  
[2024-12-16T06:25:01.558Z]  Total                       :               7288.50      28.47       0.00     0.00       0.00       0.00       0.00
00:14:44.582  
00:14:45.519  
[2024-12-16T06:25:02.495Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:45.519  	 Nvme0n1             :       9.00    7266.44      28.38       0.00     0.00       0.00       0.00       0.00
00:14:45.519  
[2024-12-16T06:25:02.495Z]  ===================================================================================================================
00:14:45.519  
[2024-12-16T06:25:02.495Z]  Total                       :               7266.44      28.38       0.00     0.00       0.00       0.00       0.00
00:14:45.519  
00:14:46.454  
[2024-12-16T06:25:03.430Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:46.454  	 Nvme0n1             :      10.00    7240.90      28.28       0.00     0.00       0.00       0.00       0.00
00:14:46.454  
[2024-12-16T06:25:03.430Z]  ===================================================================================================================
00:14:46.454  
[2024-12-16T06:25:03.431Z]  Total                       :               7240.90      28.28       0.00     0.00       0.00       0.00       0.00
00:14:46.455  
00:14:46.455  
00:14:46.455                                                                                                  Latency(us)
00:14:46.455  
[2024-12-16T06:25:03.431Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:46.455  
[2024-12-16T06:25:03.431Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:46.455  	 Nvme0n1             :      10.01    7249.91      28.32       0.00     0.00   17651.11    4974.78   57671.68
00:14:46.455  
[2024-12-16T06:25:03.431Z]  ===================================================================================================================
00:14:46.455  
[2024-12-16T06:25:03.431Z]  Total                       :               7249.91      28.32       0.00     0.00   17651.11    4974.78   57671.68
00:14:46.455  0
00:14:46.455   06:25:03	-- target/nvmf_lvs_grow.sh@66 -- # killprocess 73041
00:14:46.455   06:25:03	-- common/autotest_common.sh@936 -- # '[' -z 73041 ']'
00:14:46.455   06:25:03	-- common/autotest_common.sh@940 -- # kill -0 73041
00:14:46.455    06:25:03	-- common/autotest_common.sh@941 -- # uname
00:14:46.455   06:25:03	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:46.455    06:25:03	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73041
00:14:46.455   06:25:03	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:46.455  killing process with pid 73041
00:14:46.455   06:25:03	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:46.455   06:25:03	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 73041'
00:14:46.455  Received shutdown signal, test time was about 10.000000 seconds
00:14:46.455  
00:14:46.455                                                                                                  Latency(us)
00:14:46.455  
[2024-12-16T06:25:03.431Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:46.455  
[2024-12-16T06:25:03.431Z]  ===================================================================================================================
00:14:46.455  
[2024-12-16T06:25:03.431Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:14:46.455   06:25:03	-- common/autotest_common.sh@955 -- # kill 73041
00:14:46.455   06:25:03	-- common/autotest_common.sh@960 -- # wait 73041
00:14:46.713   06:25:03	-- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:14:46.972    06:25:03	-- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:14:46.972    06:25:03	-- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:47.230   06:25:04	-- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:14:47.230   06:25:04	-- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]]
00:14:47.230   06:25:04	-- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:14:47.488  [2024-12-16 06:25:04.355073] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:14:47.488   06:25:04	-- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:47.488   06:25:04	-- common/autotest_common.sh@650 -- # local es=0
00:14:47.488   06:25:04	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:47.488   06:25:04	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:47.488   06:25:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:47.488    06:25:04	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:47.488   06:25:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:47.488    06:25:04	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:47.488   06:25:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:47.488   06:25:04	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:47.488   06:25:04	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:14:47.488   06:25:04	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:47.746  2024/12/16 06:25:04 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:cfa60f47-133d-4746-9606-a2071cb25c56], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:14:47.746  request:
00:14:47.746  {
00:14:47.746    "method": "bdev_lvol_get_lvstores",
00:14:47.746    "params": {
00:14:47.747      "uuid": "cfa60f47-133d-4746-9606-a2071cb25c56"
00:14:47.747    }
00:14:47.747  }
00:14:47.747  Got JSON-RPC error response
00:14:47.747  GoRPCClient: error on JSON-RPC call
00:14:47.747   06:25:04	-- common/autotest_common.sh@653 -- # es=1
00:14:47.747   06:25:04	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:47.747   06:25:04	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:47.747   06:25:04	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:47.747   06:25:04	-- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:14:48.005  aio_bdev
00:14:48.005   06:25:04	-- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 4af2e175-3929-4311-8e26-fd0eec2443aa
00:14:48.005   06:25:04	-- common/autotest_common.sh@897 -- # local bdev_name=4af2e175-3929-4311-8e26-fd0eec2443aa
00:14:48.005   06:25:04	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:48.005   06:25:04	-- common/autotest_common.sh@899 -- # local i
00:14:48.005   06:25:04	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:48.005   06:25:04	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:48.005   06:25:04	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:14:48.263   06:25:05	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4af2e175-3929-4311-8e26-fd0eec2443aa -t 2000
00:14:48.522  [
00:14:48.522    {
00:14:48.522      "aliases": [
00:14:48.522        "lvs/lvol"
00:14:48.522      ],
00:14:48.522      "assigned_rate_limits": {
00:14:48.522        "r_mbytes_per_sec": 0,
00:14:48.522        "rw_ios_per_sec": 0,
00:14:48.522        "rw_mbytes_per_sec": 0,
00:14:48.522        "w_mbytes_per_sec": 0
00:14:48.522      },
00:14:48.522      "block_size": 4096,
00:14:48.522      "claimed": false,
00:14:48.522      "driver_specific": {
00:14:48.522        "lvol": {
00:14:48.522          "base_bdev": "aio_bdev",
00:14:48.522          "clone": false,
00:14:48.522          "esnap_clone": false,
00:14:48.522          "lvol_store_uuid": "cfa60f47-133d-4746-9606-a2071cb25c56",
00:14:48.522          "snapshot": false,
00:14:48.522          "thin_provision": false
00:14:48.522        }
00:14:48.522      },
00:14:48.522      "name": "4af2e175-3929-4311-8e26-fd0eec2443aa",
00:14:48.522      "num_blocks": 38912,
00:14:48.522      "product_name": "Logical Volume",
00:14:48.522      "supported_io_types": {
00:14:48.522        "abort": false,
00:14:48.522        "compare": false,
00:14:48.522        "compare_and_write": false,
00:14:48.522        "flush": false,
00:14:48.522        "nvme_admin": false,
00:14:48.522        "nvme_io": false,
00:14:48.522        "read": true,
00:14:48.522        "reset": true,
00:14:48.522        "unmap": true,
00:14:48.522        "write": true,
00:14:48.522        "write_zeroes": true
00:14:48.522      },
00:14:48.522      "uuid": "4af2e175-3929-4311-8e26-fd0eec2443aa",
00:14:48.522      "zoned": false
00:14:48.522    }
00:14:48.522  ]
00:14:48.522   06:25:05	-- common/autotest_common.sh@905 -- # return 0
00:14:48.522    06:25:05	-- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:48.522    06:25:05	-- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters'
00:14:48.780   06:25:05	-- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 ))
00:14:48.780    06:25:05	-- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:48.780    06:25:05	-- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters'
00:14:49.039   06:25:05	-- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 ))
00:14:49.039   06:25:05	-- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4af2e175-3929-4311-8e26-fd0eec2443aa
00:14:49.297   06:25:06	-- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cfa60f47-133d-4746-9606-a2071cb25c56
00:14:49.555   06:25:06	-- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:14:49.814   06:25:06	-- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:14:50.072  ************************************
00:14:50.072  END TEST lvs_grow_clean
00:14:50.072  ************************************
00:14:50.072  
00:14:50.072  real	0m17.844s
00:14:50.072  user	0m17.026s
00:14:50.072  sys	0m2.258s
00:14:50.072   06:25:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:50.072   06:25:06	-- common/autotest_common.sh@10 -- # set +x
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty
00:14:50.072   06:25:07	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:50.072   06:25:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:50.072   06:25:07	-- common/autotest_common.sh@10 -- # set +x
00:14:50.072  ************************************
00:14:50.072  START TEST lvs_grow_dirty
00:14:50.072  ************************************
00:14:50.072   06:25:07	-- common/autotest_common.sh@1114 -- # lvs_grow dirty
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:14:50.072   06:25:07	-- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:14:50.072    06:25:07	-- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:14:50.331   06:25:07	-- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:14:50.331    06:25:07	-- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:14:50.589   06:25:07	-- target/nvmf_lvs_grow.sh@28 -- # lvs=894061ee-d675-4c82-9153-a46b23757137
00:14:50.589    06:25:07	-- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:14:50.589    06:25:07	-- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:14:50.847   06:25:07	-- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:14:50.847   06:25:07	-- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:14:50.847    06:25:07	-- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 894061ee-d675-4c82-9153-a46b23757137 lvol 150
00:14:51.106   06:25:07	-- target/nvmf_lvs_grow.sh@33 -- # lvol=336524a6-5937-4c21-94cb-1a1f01252c43
00:14:51.106   06:25:07	-- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:14:51.106   06:25:07	-- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:14:51.365  [2024-12-16 06:25:08.186371] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:14:51.365  [2024-12-16 06:25:08.186439] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:14:51.365  true
00:14:51.365    06:25:08	-- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:14:51.365    06:25:08	-- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:14:51.623   06:25:08	-- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:14:51.623   06:25:08	-- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:14:51.882   06:25:08	-- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 336524a6-5937-4c21-94cb-1a1f01252c43
00:14:52.140   06:25:08	-- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:14:52.399   06:25:09	-- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:52.399   06:25:09	-- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73471
00:14:52.399   06:25:09	-- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:52.399   06:25:09	-- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:14:52.399   06:25:09	-- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73471 /var/tmp/bdevperf.sock
00:14:52.399   06:25:09	-- common/autotest_common.sh@829 -- # '[' -z 73471 ']'
00:14:52.399   06:25:09	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:52.399   06:25:09	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:52.399   06:25:09	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:52.399  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:52.399   06:25:09	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:52.399   06:25:09	-- common/autotest_common.sh@10 -- # set +x
00:14:52.658  [2024-12-16 06:25:09.391251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:52.658  [2024-12-16 06:25:09.391322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73471 ]
00:14:52.658  [2024-12-16 06:25:09.524257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:52.658  [2024-12-16 06:25:09.629866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:53.594   06:25:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:53.594   06:25:10	-- common/autotest_common.sh@862 -- # return 0
00:14:53.594   06:25:10	-- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:14:53.852  Nvme0n1
00:14:53.852   06:25:10	-- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:14:54.111  [
00:14:54.111    {
00:14:54.111      "aliases": [
00:14:54.111        "336524a6-5937-4c21-94cb-1a1f01252c43"
00:14:54.111      ],
00:14:54.111      "assigned_rate_limits": {
00:14:54.111        "r_mbytes_per_sec": 0,
00:14:54.111        "rw_ios_per_sec": 0,
00:14:54.111        "rw_mbytes_per_sec": 0,
00:14:54.111        "w_mbytes_per_sec": 0
00:14:54.111      },
00:14:54.111      "block_size": 4096,
00:14:54.111      "claimed": false,
00:14:54.111      "driver_specific": {
00:14:54.111        "mp_policy": "active_passive",
00:14:54.111        "nvme": [
00:14:54.111          {
00:14:54.111            "ctrlr_data": {
00:14:54.111              "ana_reporting": false,
00:14:54.111              "cntlid": 1,
00:14:54.111              "firmware_revision": "24.01.1",
00:14:54.111              "model_number": "SPDK bdev Controller",
00:14:54.112              "multi_ctrlr": true,
00:14:54.112              "oacs": {
00:14:54.112                "firmware": 0,
00:14:54.112                "format": 0,
00:14:54.112                "ns_manage": 0,
00:14:54.112                "security": 0
00:14:54.112              },
00:14:54.112              "serial_number": "SPDK0",
00:14:54.112              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:54.112              "vendor_id": "0x8086"
00:14:54.112            },
00:14:54.112            "ns_data": {
00:14:54.112              "can_share": true,
00:14:54.112              "id": 1
00:14:54.112            },
00:14:54.112            "trid": {
00:14:54.112              "adrfam": "IPv4",
00:14:54.112              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:54.112              "traddr": "10.0.0.2",
00:14:54.112              "trsvcid": "4420",
00:14:54.112              "trtype": "TCP"
00:14:54.112            },
00:14:54.112            "vs": {
00:14:54.112              "nvme_version": "1.3"
00:14:54.112            }
00:14:54.112          }
00:14:54.112        ]
00:14:54.112      },
00:14:54.112      "name": "Nvme0n1",
00:14:54.112      "num_blocks": 38912,
00:14:54.112      "product_name": "NVMe disk",
00:14:54.112      "supported_io_types": {
00:14:54.112        "abort": true,
00:14:54.112        "compare": true,
00:14:54.112        "compare_and_write": true,
00:14:54.112        "flush": true,
00:14:54.112        "nvme_admin": true,
00:14:54.112        "nvme_io": true,
00:14:54.112        "read": true,
00:14:54.112        "reset": true,
00:14:54.112        "unmap": true,
00:14:54.112        "write": true,
00:14:54.112        "write_zeroes": true
00:14:54.112      },
00:14:54.112      "uuid": "336524a6-5937-4c21-94cb-1a1f01252c43",
00:14:54.112      "zoned": false
00:14:54.112    }
00:14:54.112  ]
00:14:54.112   06:25:10	-- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73518
00:14:54.112   06:25:10	-- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:14:54.112   06:25:10	-- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:14:54.112  Running I/O for 10 seconds...
00:14:55.048                                                                                                  Latency(us)
00:14:55.048  
[2024-12-16T06:25:12.024Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:55.048  
[2024-12-16T06:25:12.024Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:55.048  	 Nvme0n1             :       1.00    8409.00      32.85       0.00     0.00       0.00       0.00       0.00
00:14:55.048  
[2024-12-16T06:25:12.024Z]  ===================================================================================================================
00:14:55.048  
[2024-12-16T06:25:12.024Z]  Total                       :               8409.00      32.85       0.00     0.00       0.00       0.00       0.00
00:14:55.048  
00:14:55.983   06:25:12	-- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 894061ee-d675-4c82-9153-a46b23757137
00:14:56.241  
[2024-12-16T06:25:13.217Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:56.241  	 Nvme0n1             :       2.00    8569.50      33.47       0.00     0.00       0.00       0.00       0.00
00:14:56.241  
[2024-12-16T06:25:13.217Z]  ===================================================================================================================
00:14:56.241  
[2024-12-16T06:25:13.217Z]  Total                       :               8569.50      33.47       0.00     0.00       0.00       0.00       0.00
00:14:56.241  
00:14:56.241  true
00:14:56.241    06:25:13	-- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:14:56.241    06:25:13	-- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:14:56.500   06:25:13	-- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:14:56.500   06:25:13	-- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:14:56.500   06:25:13	-- target/nvmf_lvs_grow.sh@65 -- # wait 73518
00:14:57.066  
[2024-12-16T06:25:14.042Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:57.066  	 Nvme0n1             :       3.00    8618.00      33.66       0.00     0.00       0.00       0.00       0.00
00:14:57.066  
[2024-12-16T06:25:14.042Z]  ===================================================================================================================
00:14:57.066  
[2024-12-16T06:25:14.042Z]  Total                       :               8618.00      33.66       0.00     0.00       0.00       0.00       0.00
00:14:57.066  
00:14:58.001  
[2024-12-16T06:25:14.977Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:58.001  	 Nvme0n1             :       4.00    8646.50      33.78       0.00     0.00       0.00       0.00       0.00
00:14:58.001  
[2024-12-16T06:25:14.977Z]  ===================================================================================================================
00:14:58.001  
[2024-12-16T06:25:14.977Z]  Total                       :               8646.50      33.78       0.00     0.00       0.00       0.00       0.00
00:14:58.001  
00:14:59.034  
[2024-12-16T06:25:16.010Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:59.034  	 Nvme0n1             :       5.00    8626.40      33.70       0.00     0.00       0.00       0.00       0.00
00:14:59.034  
[2024-12-16T06:25:16.010Z]  ===================================================================================================================
00:14:59.034  
[2024-12-16T06:25:16.010Z]  Total                       :               8626.40      33.70       0.00     0.00       0.00       0.00       0.00
00:14:59.034  
00:15:00.410  
[2024-12-16T06:25:17.386Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:15:00.410  	 Nvme0n1             :       6.00    8616.67      33.66       0.00     0.00       0.00       0.00       0.00
00:15:00.410  
[2024-12-16T06:25:17.386Z]  ===================================================================================================================
00:15:00.410  
[2024-12-16T06:25:17.386Z]  Total                       :               8616.67      33.66       0.00     0.00       0.00       0.00       0.00
00:15:00.410  
00:15:01.345  
[2024-12-16T06:25:18.322Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:15:01.346  	 Nvme0n1             :       7.00    8574.29      33.49       0.00     0.00       0.00       0.00       0.00
00:15:01.346  
[2024-12-16T06:25:18.322Z]  ===================================================================================================================
00:15:01.346  
[2024-12-16T06:25:18.322Z]  Total                       :               8574.29      33.49       0.00     0.00       0.00       0.00       0.00
00:15:01.346  
00:15:02.282  
[2024-12-16T06:25:19.258Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:15:02.282  	 Nvme0n1             :       8.00    8517.62      33.27       0.00     0.00       0.00       0.00       0.00
00:15:02.282  
[2024-12-16T06:25:19.258Z]  ===================================================================================================================
00:15:02.282  
[2024-12-16T06:25:19.258Z]  Total                       :               8517.62      33.27       0.00     0.00       0.00       0.00       0.00
00:15:02.282  
00:15:03.218  
[2024-12-16T06:25:20.194Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:15:03.218  	 Nvme0n1             :       9.00    8478.33      33.12       0.00     0.00       0.00       0.00       0.00
00:15:03.218  
[2024-12-16T06:25:20.194Z]  ===================================================================================================================
00:15:03.218  
[2024-12-16T06:25:20.194Z]  Total                       :               8478.33      33.12       0.00     0.00       0.00       0.00       0.00
00:15:03.218  
00:15:04.154  
[2024-12-16T06:25:21.130Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:15:04.154  	 Nvme0n1             :      10.00    8428.90      32.93       0.00     0.00       0.00       0.00       0.00
00:15:04.154  
[2024-12-16T06:25:21.130Z]  ===================================================================================================================
00:15:04.154  
[2024-12-16T06:25:21.130Z]  Total                       :               8428.90      32.93       0.00     0.00       0.00       0.00       0.00
00:15:04.154  
00:15:04.154  
00:15:04.154                                                                                                  Latency(us)
00:15:04.154  
[2024-12-16T06:25:21.130Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:04.154  
[2024-12-16T06:25:21.130Z]  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:15:04.154  	 Nvme0n1             :      10.01    8429.63      32.93       0.00     0.00   15174.53    6821.70   38368.35
00:15:04.154  
[2024-12-16T06:25:21.130Z]  ===================================================================================================================
00:15:04.154  
[2024-12-16T06:25:21.130Z]  Total                       :               8429.63      32.93       0.00     0.00   15174.53    6821.70   38368.35
00:15:04.154  0
00:15:04.154   06:25:20	-- target/nvmf_lvs_grow.sh@66 -- # killprocess 73471
00:15:04.154   06:25:20	-- common/autotest_common.sh@936 -- # '[' -z 73471 ']'
00:15:04.154   06:25:20	-- common/autotest_common.sh@940 -- # kill -0 73471
00:15:04.154    06:25:20	-- common/autotest_common.sh@941 -- # uname
00:15:04.154   06:25:21	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:04.154    06:25:21	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73471
00:15:04.154   06:25:21	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:15:04.154  killing process with pid 73471
00:15:04.154   06:25:21	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:15:04.154   06:25:21	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 73471'
00:15:04.154  Received shutdown signal, test time was about 10.000000 seconds
00:15:04.154  
00:15:04.154                                                                                                  Latency(us)
00:15:04.154  
[2024-12-16T06:25:21.130Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:04.154  
[2024-12-16T06:25:21.130Z]  ===================================================================================================================
00:15:04.154  
[2024-12-16T06:25:21.130Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:15:04.154   06:25:21	-- common/autotest_common.sh@955 -- # kill 73471
00:15:04.154   06:25:21	-- common/autotest_common.sh@960 -- # wait 73471
00:15:04.413   06:25:21	-- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:15:04.672    06:25:21	-- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:15:04.672    06:25:21	-- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:15:04.930   06:25:21	-- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:15:04.930   06:25:21	-- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]]
00:15:04.930   06:25:21	-- target/nvmf_lvs_grow.sh@73 -- # kill -9 72879
00:15:04.930   06:25:21	-- target/nvmf_lvs_grow.sh@74 -- # wait 72879
00:15:04.930  /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72879 Killed                  "${NVMF_APP[@]}" "$@"
00:15:04.930   06:25:21	-- target/nvmf_lvs_grow.sh@74 -- # true
00:15:04.930   06:25:21	-- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1
00:15:04.930   06:25:21	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:15:04.930   06:25:21	-- common/autotest_common.sh@722 -- # xtrace_disable
00:15:04.930   06:25:21	-- common/autotest_common.sh@10 -- # set +x
00:15:04.930   06:25:21	-- nvmf/common.sh@469 -- # nvmfpid=73669
00:15:04.930   06:25:21	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:15:04.930   06:25:21	-- nvmf/common.sh@470 -- # waitforlisten 73669
00:15:04.930   06:25:21	-- common/autotest_common.sh@829 -- # '[' -z 73669 ']'
00:15:04.930   06:25:21	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:04.930   06:25:21	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:04.930  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:04.930   06:25:21	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:04.930   06:25:21	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:04.930   06:25:21	-- common/autotest_common.sh@10 -- # set +x
00:15:04.930  [2024-12-16 06:25:21.884819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:04.930  [2024-12-16 06:25:21.884922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:05.189  [2024-12-16 06:25:22.026183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:05.189  [2024-12-16 06:25:22.138071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:05.189  [2024-12-16 06:25:22.138215] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:05.189  [2024-12-16 06:25:22.138228] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:05.189  [2024-12-16 06:25:22.138237] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:05.189  [2024-12-16 06:25:22.138274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:06.124   06:25:22	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:06.124   06:25:22	-- common/autotest_common.sh@862 -- # return 0
00:15:06.124   06:25:22	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:15:06.124   06:25:22	-- common/autotest_common.sh@728 -- # xtrace_disable
00:15:06.124   06:25:22	-- common/autotest_common.sh@10 -- # set +x
00:15:06.124   06:25:22	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:06.124    06:25:22	-- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:15:06.383  [2024-12-16 06:25:23.170673] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore
00:15:06.383  [2024-12-16 06:25:23.171077] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:15:06.383  [2024-12-16 06:25:23.171305] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:15:06.383   06:25:23	-- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev
00:15:06.383   06:25:23	-- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 336524a6-5937-4c21-94cb-1a1f01252c43
00:15:06.383   06:25:23	-- common/autotest_common.sh@897 -- # local bdev_name=336524a6-5937-4c21-94cb-1a1f01252c43
00:15:06.383   06:25:23	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:06.383   06:25:23	-- common/autotest_common.sh@899 -- # local i
00:15:06.383   06:25:23	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:06.383   06:25:23	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:06.383   06:25:23	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:15:06.641   06:25:23	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 336524a6-5937-4c21-94cb-1a1f01252c43 -t 2000
00:15:06.900  [
00:15:06.900    {
00:15:06.900      "aliases": [
00:15:06.900        "lvs/lvol"
00:15:06.900      ],
00:15:06.900      "assigned_rate_limits": {
00:15:06.900        "r_mbytes_per_sec": 0,
00:15:06.900        "rw_ios_per_sec": 0,
00:15:06.900        "rw_mbytes_per_sec": 0,
00:15:06.900        "w_mbytes_per_sec": 0
00:15:06.900      },
00:15:06.900      "block_size": 4096,
00:15:06.900      "claimed": false,
00:15:06.900      "driver_specific": {
00:15:06.900        "lvol": {
00:15:06.900          "base_bdev": "aio_bdev",
00:15:06.900          "clone": false,
00:15:06.900          "esnap_clone": false,
00:15:06.900          "lvol_store_uuid": "894061ee-d675-4c82-9153-a46b23757137",
00:15:06.900          "snapshot": false,
00:15:06.900          "thin_provision": false
00:15:06.900        }
00:15:06.900      },
00:15:06.900      "name": "336524a6-5937-4c21-94cb-1a1f01252c43",
00:15:06.900      "num_blocks": 38912,
00:15:06.900      "product_name": "Logical Volume",
00:15:06.900      "supported_io_types": {
00:15:06.900        "abort": false,
00:15:06.900        "compare": false,
00:15:06.900        "compare_and_write": false,
00:15:06.900        "flush": false,
00:15:06.900        "nvme_admin": false,
00:15:06.900        "nvme_io": false,
00:15:06.900        "read": true,
00:15:06.900        "reset": true,
00:15:06.900        "unmap": true,
00:15:06.900        "write": true,
00:15:06.900        "write_zeroes": true
00:15:06.900      },
00:15:06.900      "uuid": "336524a6-5937-4c21-94cb-1a1f01252c43",
00:15:06.900      "zoned": false
00:15:06.900    }
00:15:06.900  ]
00:15:06.900   06:25:23	-- common/autotest_common.sh@905 -- # return 0
00:15:06.900    06:25:23	-- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters'
00:15:06.900    06:25:23	-- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:15:07.159   06:25:23	-- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 ))
00:15:07.159    06:25:23	-- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:15:07.159    06:25:23	-- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters'
00:15:07.417   06:25:24	-- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 ))
00:15:07.417   06:25:24	-- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:15:07.677  [2024-12-16 06:25:24.416102] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:15:07.677   06:25:24	-- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:15:07.677   06:25:24	-- common/autotest_common.sh@650 -- # local es=0
00:15:07.677   06:25:24	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:15:07.677   06:25:24	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:07.677   06:25:24	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:07.677    06:25:24	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:07.677   06:25:24	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:07.677    06:25:24	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:07.677   06:25:24	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:07.677   06:25:24	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:07.677   06:25:24	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:15:07.677   06:25:24	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:15:07.677  2024/12/16 06:25:24 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:894061ee-d675-4c82-9153-a46b23757137], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:15:07.677  request:
00:15:07.677  {
00:15:07.677    "method": "bdev_lvol_get_lvstores",
00:15:07.677    "params": {
00:15:07.677      "uuid": "894061ee-d675-4c82-9153-a46b23757137"
00:15:07.677    }
00:15:07.677  }
00:15:07.677  Got JSON-RPC error response
00:15:07.677  GoRPCClient: error on JSON-RPC call
00:15:07.937   06:25:24	-- common/autotest_common.sh@653 -- # es=1
00:15:07.937   06:25:24	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:07.937   06:25:24	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:07.937   06:25:24	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:07.937   06:25:24	-- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:15:07.937  aio_bdev
00:15:07.937   06:25:24	-- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 336524a6-5937-4c21-94cb-1a1f01252c43
00:15:07.937   06:25:24	-- common/autotest_common.sh@897 -- # local bdev_name=336524a6-5937-4c21-94cb-1a1f01252c43
00:15:07.937   06:25:24	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:07.937   06:25:24	-- common/autotest_common.sh@899 -- # local i
00:15:07.937   06:25:24	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:07.937   06:25:24	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:07.937   06:25:24	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:15:08.199   06:25:25	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 336524a6-5937-4c21-94cb-1a1f01252c43 -t 2000
00:15:08.460  [
00:15:08.460    {
00:15:08.460      "aliases": [
00:15:08.460        "lvs/lvol"
00:15:08.460      ],
00:15:08.460      "assigned_rate_limits": {
00:15:08.460        "r_mbytes_per_sec": 0,
00:15:08.460        "rw_ios_per_sec": 0,
00:15:08.460        "rw_mbytes_per_sec": 0,
00:15:08.460        "w_mbytes_per_sec": 0
00:15:08.460      },
00:15:08.460      "block_size": 4096,
00:15:08.460      "claimed": false,
00:15:08.460      "driver_specific": {
00:15:08.460        "lvol": {
00:15:08.460          "base_bdev": "aio_bdev",
00:15:08.460          "clone": false,
00:15:08.460          "esnap_clone": false,
00:15:08.460          "lvol_store_uuid": "894061ee-d675-4c82-9153-a46b23757137",
00:15:08.460          "snapshot": false,
00:15:08.460          "thin_provision": false
00:15:08.460        }
00:15:08.460      },
00:15:08.460      "name": "336524a6-5937-4c21-94cb-1a1f01252c43",
00:15:08.460      "num_blocks": 38912,
00:15:08.460      "product_name": "Logical Volume",
00:15:08.460      "supported_io_types": {
00:15:08.460        "abort": false,
00:15:08.460        "compare": false,
00:15:08.460        "compare_and_write": false,
00:15:08.460        "flush": false,
00:15:08.460        "nvme_admin": false,
00:15:08.460        "nvme_io": false,
00:15:08.460        "read": true,
00:15:08.460        "reset": true,
00:15:08.460        "unmap": true,
00:15:08.460        "write": true,
00:15:08.460        "write_zeroes": true
00:15:08.460      },
00:15:08.461      "uuid": "336524a6-5937-4c21-94cb-1a1f01252c43",
00:15:08.461      "zoned": false
00:15:08.461    }
00:15:08.461  ]
00:15:08.461   06:25:25	-- common/autotest_common.sh@905 -- # return 0
00:15:08.461    06:25:25	-- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:15:08.461    06:25:25	-- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters'
00:15:08.725   06:25:25	-- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 ))
00:15:08.725    06:25:25	-- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894061ee-d675-4c82-9153-a46b23757137
00:15:08.725    06:25:25	-- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters'
00:15:08.984   06:25:25	-- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 ))
00:15:08.984   06:25:25	-- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 336524a6-5937-4c21-94cb-1a1f01252c43
00:15:09.243   06:25:26	-- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 894061ee-d675-4c82-9153-a46b23757137
00:15:09.502   06:25:26	-- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:15:09.502   06:25:26	-- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:15:10.070  
00:15:10.070  real	0m19.824s
00:15:10.070  user	0m39.405s
00:15:10.070  sys	0m9.152s
00:15:10.070  ************************************
00:15:10.070  END TEST lvs_grow_dirty
00:15:10.070  ************************************
00:15:10.070   06:25:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:10.070   06:25:26	-- common/autotest_common.sh@10 -- # set +x
00:15:10.070   06:25:26	-- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:15:10.070   06:25:26	-- common/autotest_common.sh@806 -- # type=--id
00:15:10.070   06:25:26	-- common/autotest_common.sh@807 -- # id=0
00:15:10.070   06:25:26	-- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:15:10.070    06:25:26	-- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:15:10.070   06:25:26	-- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:15:10.070   06:25:26	-- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:15:10.070   06:25:26	-- common/autotest_common.sh@818 -- # for n in $shm_files
00:15:10.070   06:25:26	-- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:15:10.070  nvmf_trace.0
00:15:10.070   06:25:26	-- common/autotest_common.sh@821 -- # return 0
00:15:10.070   06:25:26	-- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:15:10.070   06:25:26	-- nvmf/common.sh@476 -- # nvmfcleanup
00:15:10.070   06:25:26	-- nvmf/common.sh@116 -- # sync
00:15:10.638   06:25:27	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:15:10.638   06:25:27	-- nvmf/common.sh@119 -- # set +e
00:15:10.638   06:25:27	-- nvmf/common.sh@120 -- # for i in {1..20}
00:15:10.638   06:25:27	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:15:10.638  rmmod nvme_tcp
00:15:10.638  rmmod nvme_fabrics
00:15:10.638  rmmod nvme_keyring
00:15:10.638   06:25:27	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:15:10.638   06:25:27	-- nvmf/common.sh@123 -- # set -e
00:15:10.638   06:25:27	-- nvmf/common.sh@124 -- # return 0
00:15:10.638   06:25:27	-- nvmf/common.sh@477 -- # '[' -n 73669 ']'
00:15:10.638   06:25:27	-- nvmf/common.sh@478 -- # killprocess 73669
00:15:10.638   06:25:27	-- common/autotest_common.sh@936 -- # '[' -z 73669 ']'
00:15:10.638   06:25:27	-- common/autotest_common.sh@940 -- # kill -0 73669
00:15:10.638    06:25:27	-- common/autotest_common.sh@941 -- # uname
00:15:10.638   06:25:27	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:10.638    06:25:27	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73669
00:15:10.638   06:25:27	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:10.638  killing process with pid 73669
00:15:10.638   06:25:27	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:10.638   06:25:27	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 73669'
00:15:10.638   06:25:27	-- common/autotest_common.sh@955 -- # kill 73669
00:15:10.638   06:25:27	-- common/autotest_common.sh@960 -- # wait 73669
00:15:10.897   06:25:27	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:15:10.898   06:25:27	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:15:10.898   06:25:27	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:15:10.898   06:25:27	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:10.898   06:25:27	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:10.898   06:25:27	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:10.898   06:25:27	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:10.898    06:25:27	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:10.898   06:25:27	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:15:10.898  
00:15:10.898  real	0m40.721s
00:15:10.898  user	1m3.076s
00:15:10.898  sys	0m12.478s
00:15:10.898   06:25:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:10.898   06:25:27	-- common/autotest_common.sh@10 -- # set +x
00:15:10.898  ************************************
00:15:10.898  END TEST nvmf_lvs_grow
00:15:10.898  ************************************
00:15:11.157   06:25:27	-- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:15:11.157   06:25:27	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:11.157   06:25:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:11.157   06:25:27	-- common/autotest_common.sh@10 -- # set +x
00:15:11.157  ************************************
00:15:11.157  START TEST nvmf_bdev_io_wait
00:15:11.157  ************************************
00:15:11.157   06:25:27	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:15:11.157  * Looking for test storage...
00:15:11.157  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:15:11.157    06:25:28	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:11.157     06:25:28	-- common/autotest_common.sh@1690 -- # lcov --version
00:15:11.157     06:25:28	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:11.157    06:25:28	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:11.157    06:25:28	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:11.157    06:25:28	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:11.157    06:25:28	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:11.157    06:25:28	-- scripts/common.sh@335 -- # IFS=.-:
00:15:11.157    06:25:28	-- scripts/common.sh@335 -- # read -ra ver1
00:15:11.157    06:25:28	-- scripts/common.sh@336 -- # IFS=.-:
00:15:11.157    06:25:28	-- scripts/common.sh@336 -- # read -ra ver2
00:15:11.157    06:25:28	-- scripts/common.sh@337 -- # local 'op=<'
00:15:11.157    06:25:28	-- scripts/common.sh@339 -- # ver1_l=2
00:15:11.157    06:25:28	-- scripts/common.sh@340 -- # ver2_l=1
00:15:11.157    06:25:28	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:11.157    06:25:28	-- scripts/common.sh@343 -- # case "$op" in
00:15:11.157    06:25:28	-- scripts/common.sh@344 -- # : 1
00:15:11.157    06:25:28	-- scripts/common.sh@363 -- # (( v = 0 ))
00:15:11.157    06:25:28	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:11.157     06:25:28	-- scripts/common.sh@364 -- # decimal 1
00:15:11.157     06:25:28	-- scripts/common.sh@352 -- # local d=1
00:15:11.157     06:25:28	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:11.157     06:25:28	-- scripts/common.sh@354 -- # echo 1
00:15:11.158    06:25:28	-- scripts/common.sh@364 -- # ver1[v]=1
00:15:11.158     06:25:28	-- scripts/common.sh@365 -- # decimal 2
00:15:11.158     06:25:28	-- scripts/common.sh@352 -- # local d=2
00:15:11.158     06:25:28	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:11.158     06:25:28	-- scripts/common.sh@354 -- # echo 2
00:15:11.158    06:25:28	-- scripts/common.sh@365 -- # ver2[v]=2
00:15:11.158    06:25:28	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:15:11.158    06:25:28	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:15:11.158    06:25:28	-- scripts/common.sh@367 -- # return 0
00:15:11.158    06:25:28	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:11.158    06:25:28	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:15:11.158  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:11.158  		--rc genhtml_branch_coverage=1
00:15:11.158  		--rc genhtml_function_coverage=1
00:15:11.158  		--rc genhtml_legend=1
00:15:11.158  		--rc geninfo_all_blocks=1
00:15:11.158  		--rc geninfo_unexecuted_blocks=1
00:15:11.158  		
00:15:11.158  		'
00:15:11.158    06:25:28	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:15:11.158  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:11.158  		--rc genhtml_branch_coverage=1
00:15:11.158  		--rc genhtml_function_coverage=1
00:15:11.158  		--rc genhtml_legend=1
00:15:11.158  		--rc geninfo_all_blocks=1
00:15:11.158  		--rc geninfo_unexecuted_blocks=1
00:15:11.158  		
00:15:11.158  		'
00:15:11.158    06:25:28	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:15:11.158  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:11.158  		--rc genhtml_branch_coverage=1
00:15:11.158  		--rc genhtml_function_coverage=1
00:15:11.158  		--rc genhtml_legend=1
00:15:11.158  		--rc geninfo_all_blocks=1
00:15:11.158  		--rc geninfo_unexecuted_blocks=1
00:15:11.158  		
00:15:11.158  		'
00:15:11.158    06:25:28	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:15:11.158  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:11.158  		--rc genhtml_branch_coverage=1
00:15:11.158  		--rc genhtml_function_coverage=1
00:15:11.158  		--rc genhtml_legend=1
00:15:11.158  		--rc geninfo_all_blocks=1
00:15:11.158  		--rc geninfo_unexecuted_blocks=1
00:15:11.158  		
00:15:11.158  		'
00:15:11.158   06:25:28	-- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:15:11.158     06:25:28	-- nvmf/common.sh@7 -- # uname -s
00:15:11.158    06:25:28	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:11.158    06:25:28	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:11.158    06:25:28	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:11.158    06:25:28	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:11.158    06:25:28	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:11.158    06:25:28	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:11.158    06:25:28	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:11.158    06:25:28	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:11.158    06:25:28	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:11.158     06:25:28	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:11.158    06:25:28	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:15:11.158    06:25:28	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:15:11.158    06:25:28	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:11.158    06:25:28	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:11.158    06:25:28	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:15:11.158    06:25:28	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:11.158     06:25:28	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:11.158     06:25:28	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:11.158     06:25:28	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:11.158      06:25:28	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:11.158      06:25:28	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:11.158      06:25:28	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:11.158      06:25:28	-- paths/export.sh@5 -- # export PATH
00:15:11.417      06:25:28	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:11.417    06:25:28	-- nvmf/common.sh@46 -- # : 0
00:15:11.417    06:25:28	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:15:11.417    06:25:28	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:15:11.417    06:25:28	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:15:11.417    06:25:28	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:11.417    06:25:28	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:11.417    06:25:28	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:15:11.417    06:25:28	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:15:11.417    06:25:28	-- nvmf/common.sh@50 -- # have_pci_nics=0
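Note: build_nvmf_app_args above assembles the target command line into the NVMF_APP array; only the branches actually taken in this run append anything. A condensed sketch of the effective result (the base command is an assumption inferred from the nvmf_tgt invocation later in the log) is:

# Effective NVMF_APP assembly for this run (sketch, simplified from nvmf/common.sh):
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)    # assumed base command
NVMF_APP_SHM_ID=0
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                   # shm id 0, tracepoint group mask 0xFFFF
# NO_HUGE is empty here and no PCI allow-list is configured (have_pci_nics=0),
# so the remaining branches are no-ops. nvmf/common.sh@208 later prepends
# "${NVMF_TARGET_NS_CMD[@]}" so the target runs inside nvmf_tgt_ns_spdk.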
00:15:11.417   06:25:28	-- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:11.417   06:25:28	-- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:11.417   06:25:28	-- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:15:11.417   06:25:28	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:15:11.417   06:25:28	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:11.417   06:25:28	-- nvmf/common.sh@436 -- # prepare_net_devs
00:15:11.417   06:25:28	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:15:11.417   06:25:28	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:15:11.417   06:25:28	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:11.417   06:25:28	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:11.417    06:25:28	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:11.417   06:25:28	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:15:11.417   06:25:28	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:15:11.417   06:25:28	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:15:11.417   06:25:28	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:15:11.418   06:25:28	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:15:11.418   06:25:28	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:15:11.418   06:25:28	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:11.418   06:25:28	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:11.418   06:25:28	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:15:11.418   06:25:28	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:15:11.418   06:25:28	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:15:11.418   06:25:28	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:15:11.418   06:25:28	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:15:11.418   06:25:28	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:11.418   06:25:28	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:15:11.418   06:25:28	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:15:11.418   06:25:28	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:15:11.418   06:25:28	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:15:11.418   06:25:28	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:15:11.418   06:25:28	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:15:11.418  Cannot find device "nvmf_tgt_br"
00:15:11.418   06:25:28	-- nvmf/common.sh@154 -- # true
00:15:11.418   06:25:28	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:15:11.418  Cannot find device "nvmf_tgt_br2"
00:15:11.418   06:25:28	-- nvmf/common.sh@155 -- # true
00:15:11.418   06:25:28	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:15:11.418   06:25:28	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:15:11.418  Cannot find device "nvmf_tgt_br"
00:15:11.418   06:25:28	-- nvmf/common.sh@157 -- # true
00:15:11.418   06:25:28	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:15:11.418  Cannot find device "nvmf_tgt_br2"
00:15:11.418   06:25:28	-- nvmf/common.sh@158 -- # true
00:15:11.418   06:25:28	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:15:11.418   06:25:28	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:15:11.418   06:25:28	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:11.418  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:11.418   06:25:28	-- nvmf/common.sh@161 -- # true
00:15:11.418   06:25:28	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:11.418  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:11.418   06:25:28	-- nvmf/common.sh@162 -- # true
00:15:11.418   06:25:28	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:15:11.418   06:25:28	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:15:11.418   06:25:28	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:15:11.418   06:25:28	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:15:11.418   06:25:28	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:15:11.418   06:25:28	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:15:11.418   06:25:28	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:15:11.418   06:25:28	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:15:11.418   06:25:28	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:15:11.418   06:25:28	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:15:11.418   06:25:28	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:15:11.418   06:25:28	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:15:11.418   06:25:28	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:15:11.418   06:25:28	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:15:11.418   06:25:28	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:15:11.677   06:25:28	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:11.677   06:25:28	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:15:11.677   06:25:28	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:15:11.677   06:25:28	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:15:11.677   06:25:28	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:11.677   06:25:28	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:11.677   06:25:28	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:11.677   06:25:28	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:15:11.677   06:25:28	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:15:11.677  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:11.677  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms
00:15:11.677  
00:15:11.677  --- 10.0.0.2 ping statistics ---
00:15:11.677  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:11.677  rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms
00:15:11.677   06:25:28	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:15:11.677  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:11.677  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms
00:15:11.677  
00:15:11.677  --- 10.0.0.3 ping statistics ---
00:15:11.677  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:11.677  rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:15:11.677   06:25:28	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:11.677  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:11.677  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
00:15:11.677  
00:15:11.677  --- 10.0.0.1 ping statistics ---
00:15:11.677  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:11.677  rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
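Note: taken together, the ip/iptables calls above build the virtual topology the TCP tests run on: one initiator veth in the root namespace, two target veths moved into nvmf_tgt_ns_spdk, and all peer ends attached to a bridge. A condensed sketch of the essential steps (the individual "ip link set ... up" calls are omitted) is:

# nvmf_veth_init, condensed (sketch):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
# The three pings above (10.0.0.2, 10.0.0.3, then 10.0.0.1 from inside the
# namespace) verify end-to-end connectivity before the target is started.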
00:15:11.677   06:25:28	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:11.677   06:25:28	-- nvmf/common.sh@421 -- # return 0
00:15:11.677   06:25:28	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:15:11.677   06:25:28	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:11.677   06:25:28	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:15:11.677   06:25:28	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:15:11.677   06:25:28	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:11.677   06:25:28	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:15:11.677   06:25:28	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:15:11.677   06:25:28	-- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:15:11.677   06:25:28	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:15:11.677   06:25:28	-- common/autotest_common.sh@722 -- # xtrace_disable
00:15:11.677   06:25:28	-- common/autotest_common.sh@10 -- # set +x
00:15:11.677   06:25:28	-- nvmf/common.sh@469 -- # nvmfpid=74094
00:15:11.677   06:25:28	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:15:11.677   06:25:28	-- nvmf/common.sh@470 -- # waitforlisten 74094
00:15:11.677   06:25:28	-- common/autotest_common.sh@829 -- # '[' -z 74094 ']'
00:15:11.677   06:25:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:11.677  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:11.677   06:25:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:11.677   06:25:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:11.677   06:25:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:11.677   06:25:28	-- common/autotest_common.sh@10 -- # set +x
00:15:11.677  [2024-12-16 06:25:28.582318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:11.677  [2024-12-16 06:25:28.582399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:11.936  [2024-12-16 06:25:28.716307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:11.936  [2024-12-16 06:25:28.808457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:11.936  [2024-12-16 06:25:28.808625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:11.936  [2024-12-16 06:25:28.808642] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:11.936  [2024-12-16 06:25:28.808651] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:11.936  [2024-12-16 06:25:28.808813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:11.936  [2024-12-16 06:25:28.810537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:15:11.936  [2024-12-16 06:25:28.810675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:15:11.936  [2024-12-16 06:25:28.810745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:12.875   06:25:29	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:12.875   06:25:29	-- common/autotest_common.sh@862 -- # return 0
00:15:12.875   06:25:29	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:15:12.875   06:25:29	-- common/autotest_common.sh@728 -- # xtrace_disable
00:15:12.875   06:25:29	-- common/autotest_common.sh@10 -- # set +x
00:15:12.875   06:25:29	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:12.875   06:25:29	-- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:15:12.875   06:25:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.875   06:25:29	-- common/autotest_common.sh@10 -- # set +x
00:15:12.875   06:25:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.875   06:25:29	-- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:15:12.875   06:25:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.875   06:25:29	-- common/autotest_common.sh@10 -- # set +x
00:15:12.875   06:25:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.875   06:25:29	-- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:12.875   06:25:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.875   06:25:29	-- common/autotest_common.sh@10 -- # set +x
00:15:12.875  [2024-12-16 06:25:29.693388] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:12.875   06:25:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
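Note: the target was launched with --wait-for-rpc, which pauses subsystem initialization until an explicit RPC so that bdev options can be tuned first; the tiny pool/cache values are what let this test exercise the bdev I/O-wait path. Assuming rpc_cmd forwards to scripts/rpc.py on the default /var/tmp/spdk.sock, the sequence traced above corresponds to:

# Paused-init startup used by bdev_io_wait.sh (sketch):
scripts/rpc.py bdev_set_options -p 5 -c 1                 # deliberately small bdev I/O pool (5) and cache (1)
scripts/rpc.py framework_start_init                       # resume subsystem initialization
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte I/O unit (flags as traced above)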
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:15:12.876   06:25:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.876   06:25:29	-- common/autotest_common.sh@10 -- # set +x
00:15:12.876  Malloc0
00:15:12.876   06:25:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:15:12.876   06:25:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.876   06:25:29	-- common/autotest_common.sh@10 -- # set +x
00:15:12.876   06:25:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:12.876   06:25:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.876   06:25:29	-- common/autotest_common.sh@10 -- # set +x
00:15:12.876   06:25:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:12.876   06:25:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.876   06:25:29	-- common/autotest_common.sh@10 -- # set +x
00:15:12.876  [2024-12-16 06:25:29.759183] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:12.876   06:25:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
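Note: the RPC sequence just traced exports a 64 MiB, 512-byte-block RAM disk over NVMe/TCP. Assuming rpc_cmd wraps scripts/rpc.py, the same export path reads as:

# Export path for the bdev_io_wait test (sketch):
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                            # allow any host, set serial
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose the bdev as a namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                          # listen on the first target IP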
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@28 -- # WRITE_PID=74147
00:15:12.876    06:25:29	-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@30 -- # READ_PID=74149
00:15:12.876    06:25:29	-- nvmf/common.sh@520 -- # config=()
00:15:12.876    06:25:29	-- nvmf/common.sh@520 -- # local subsystem config
00:15:12.876    06:25:29	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:12.876    06:25:29	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:12.876  {
00:15:12.876    "params": {
00:15:12.876      "name": "Nvme$subsystem",
00:15:12.876      "trtype": "$TEST_TRANSPORT",
00:15:12.876      "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:12.876      "adrfam": "ipv4",
00:15:12.876      "trsvcid": "$NVMF_PORT",
00:15:12.876      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:12.876      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:12.876      "hdgst": ${hdgst:-false},
00:15:12.876      "ddgst": ${ddgst:-false}
00:15:12.876    },
00:15:12.876    "method": "bdev_nvme_attach_controller"
00:15:12.876  }
00:15:12.876  EOF
00:15:12.876  )")
00:15:12.876    06:25:29	-- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:15:12.876    06:25:29	-- nvmf/common.sh@520 -- # config=()
00:15:12.876    06:25:29	-- nvmf/common.sh@520 -- # local subsystem config
00:15:12.876    06:25:29	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74151
00:15:12.876    06:25:29	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:12.876  {
00:15:12.876    "params": {
00:15:12.876      "name": "Nvme$subsystem",
00:15:12.876      "trtype": "$TEST_TRANSPORT",
00:15:12.876      "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:12.876      "adrfam": "ipv4",
00:15:12.876      "trsvcid": "$NVMF_PORT",
00:15:12.876      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:12.876      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:12.876      "hdgst": ${hdgst:-false},
00:15:12.876      "ddgst": ${ddgst:-false}
00:15:12.876    },
00:15:12.876    "method": "bdev_nvme_attach_controller"
00:15:12.876  }
00:15:12.876  EOF
00:15:12.876  )")
00:15:12.876     06:25:29	-- nvmf/common.sh@542 -- # cat
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74155
00:15:12.876     06:25:29	-- nvmf/common.sh@542 -- # cat
00:15:12.876    06:25:29	-- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:15:12.876   06:25:29	-- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:15:12.876    06:25:29	-- nvmf/common.sh@520 -- # config=()
00:15:12.876    06:25:29	-- nvmf/common.sh@520 -- # local subsystem config
00:15:12.876    06:25:29	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:12.876    06:25:29	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:12.876  {
00:15:12.876    "params": {
00:15:12.876      "name": "Nvme$subsystem",
00:15:12.876      "trtype": "$TEST_TRANSPORT",
00:15:12.876      "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:12.876      "adrfam": "ipv4",
00:15:12.876      "trsvcid": "$NVMF_PORT",
00:15:12.876      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:12.876      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:12.876      "hdgst": ${hdgst:-false},
00:15:12.876      "ddgst": ${ddgst:-false}
00:15:12.876    },
00:15:12.876    "method": "bdev_nvme_attach_controller"
00:15:12.876  }
00:15:12.876  EOF
00:15:12.876  )")
00:15:12.876    06:25:29	-- nvmf/common.sh@544 -- # jq .
00:15:12.876    06:25:29	-- nvmf/common.sh@544 -- # jq .
00:15:12.876     06:25:29	-- nvmf/common.sh@542 -- # cat
00:15:12.876    06:25:29	-- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:15:12.876    06:25:29	-- nvmf/common.sh@520 -- # config=()
00:15:12.876    06:25:29	-- nvmf/common.sh@520 -- # local subsystem config
00:15:12.876    06:25:29	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:12.876    06:25:29	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:12.876  {
00:15:12.876    "params": {
00:15:12.876      "name": "Nvme$subsystem",
00:15:12.876      "trtype": "$TEST_TRANSPORT",
00:15:12.876      "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:12.876      "adrfam": "ipv4",
00:15:12.876      "trsvcid": "$NVMF_PORT",
00:15:12.876      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:12.876      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:12.876      "hdgst": ${hdgst:-false},
00:15:12.876      "ddgst": ${ddgst:-false}
00:15:12.876    },
00:15:12.876    "method": "bdev_nvme_attach_controller"
00:15:12.876  }
00:15:12.877  EOF
00:15:12.877  )")
00:15:12.877   06:25:29	-- target/bdev_io_wait.sh@35 -- # sync
00:15:12.877     06:25:29	-- nvmf/common.sh@545 -- # IFS=,
00:15:12.877     06:25:29	-- nvmf/common.sh@542 -- # cat
00:15:12.877     06:25:29	-- nvmf/common.sh@545 -- # IFS=,
00:15:12.877     06:25:29	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:15:12.877    "params": {
00:15:12.877      "name": "Nvme1",
00:15:12.877      "trtype": "tcp",
00:15:12.877      "traddr": "10.0.0.2",
00:15:12.877      "adrfam": "ipv4",
00:15:12.877      "trsvcid": "4420",
00:15:12.877      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:12.877      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:12.877      "hdgst": false,
00:15:12.877      "ddgst": false
00:15:12.877    },
00:15:12.877    "method": "bdev_nvme_attach_controller"
00:15:12.877  }'
00:15:12.877     06:25:29	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:15:12.877    "params": {
00:15:12.877      "name": "Nvme1",
00:15:12.877      "trtype": "tcp",
00:15:12.877      "traddr": "10.0.0.2",
00:15:12.877      "adrfam": "ipv4",
00:15:12.877      "trsvcid": "4420",
00:15:12.877      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:12.877      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:12.877      "hdgst": false,
00:15:12.877      "ddgst": false
00:15:12.877    },
00:15:12.877    "method": "bdev_nvme_attach_controller"
00:15:12.877  }'
00:15:12.877    06:25:29	-- nvmf/common.sh@544 -- # jq .
00:15:12.877    06:25:29	-- nvmf/common.sh@544 -- # jq .
00:15:12.877     06:25:29	-- nvmf/common.sh@545 -- # IFS=,
00:15:12.877     06:25:29	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:15:12.877    "params": {
00:15:12.877      "name": "Nvme1",
00:15:12.877      "trtype": "tcp",
00:15:12.877      "traddr": "10.0.0.2",
00:15:12.877      "adrfam": "ipv4",
00:15:12.877      "trsvcid": "4420",
00:15:12.877      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:12.877      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:12.877      "hdgst": false,
00:15:12.877      "ddgst": false
00:15:12.877    },
00:15:12.877    "method": "bdev_nvme_attach_controller"
00:15:12.877  }'
00:15:12.877     06:25:29	-- nvmf/common.sh@545 -- # IFS=,
00:15:12.877     06:25:29	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:15:12.877    "params": {
00:15:12.877      "name": "Nvme1",
00:15:12.877      "trtype": "tcp",
00:15:12.877      "traddr": "10.0.0.2",
00:15:12.877      "adrfam": "ipv4",
00:15:12.877      "trsvcid": "4420",
00:15:12.877      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:12.877      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:12.877      "hdgst": false,
00:15:12.877      "ddgst": false
00:15:12.877    },
00:15:12.877    "method": "bdev_nvme_attach_controller"
00:15:12.877  }'
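Note: each of the four bdevperf instances above receives its target description as JSON on /dev/fd/63, and the rendered document is the bdev_nvme_attach_controller block printed just before this point. A sketch of one such invocation (the write job; process substitution is an assumption about how /dev/fd/63 arises) is:

# One of the four concurrent bdevperf jobs, reconstructed (sketch):
build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)    # appears in the trace as --json /dev/fd/63
# gen_nvmf_target_json emits one "bdev_nvme_attach_controller" params block
# (trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn ...cnode1), which bdevperf
# uses to attach Nvme1n1 before running the write/read/flush/unmap workloads.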
00:15:12.877  [2024-12-16 06:25:29.828872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:12.877  [2024-12-16 06:25:29.828950] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:15:12.877   06:25:29	-- target/bdev_io_wait.sh@37 -- # wait 74147
00:15:12.877  [2024-12-16 06:25:29.842098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:12.877  [2024-12-16 06:25:29.842160] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:15:12.877  [2024-12-16 06:25:29.846335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:12.877  [2024-12-16 06:25:29.846420] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:15:13.137  [2024-12-16 06:25:29.851438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:13.137  [2024-12-16 06:25:29.851880] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:15:13.137  [2024-12-16 06:25:30.051334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.397  [2024-12-16 06:25:30.114445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.397  [2024-12-16 06:25:30.156318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:15:13.397  [2024-12-16 06:25:30.192091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.397  [2024-12-16 06:25:30.217702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:15:13.397  [2024-12-16 06:25:30.267450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.397  Running I/O for 1 seconds...
00:15:13.397  [2024-12-16 06:25:30.292995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:15:13.397  Running I/O for 1 seconds...
00:15:13.656  [2024-12-16 06:25:30.389836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:15:13.656  Running I/O for 1 seconds...
00:15:13.656  Running I/O for 1 seconds...
00:15:14.594  
00:15:14.594                                                                                                  Latency(us)
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:15:14.594  	 Nvme1n1             :       1.00  229350.86     895.90       0.00     0.00     555.82     218.76     927.19
00:15:14.594  
[2024-12-16T06:25:31.570Z]  ===================================================================================================================
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Total                       :             229350.86     895.90       0.00     0.00     555.82     218.76     927.19
00:15:14.594  
00:15:14.594                                                                                                  Latency(us)
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:15:14.594  	 Nvme1n1             :       1.01   11075.70      43.26       0.00     0.00   11514.57    6315.29   19541.64
00:15:14.594  
[2024-12-16T06:25:31.570Z]  ===================================================================================================================
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Total                       :              11075.70      43.26       0.00     0.00   11514.57    6315.29   19541.64
00:15:14.594  
00:15:14.594                                                                                                  Latency(us)
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:15:14.594  	 Nvme1n1             :       1.01    7523.29      29.39       0.00     0.00   16926.10    9651.67   33602.09
00:15:14.594  
[2024-12-16T06:25:31.570Z]  ===================================================================================================================
00:15:14.594  
[2024-12-16T06:25:31.570Z]  Total                       :               7523.29      29.39       0.00     0.00   16926.10    9651.67   33602.09
00:15:14.853  
00:15:14.853                                                                                                  Latency(us)
00:15:14.853  
[2024-12-16T06:25:31.829Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:14.853  
[2024-12-16T06:25:31.829Z]  Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:15:14.853  	 Nvme1n1             :       1.01    8749.61      34.18       0.00     0.00   14570.36    3381.06   25022.84
00:15:14.853  
[2024-12-16T06:25:31.829Z]  ===================================================================================================================
00:15:14.853  
[2024-12-16T06:25:31.829Z]  Total                       :               8749.61      34.18       0.00     0.00   14570.36    3381.06   25022.84
00:15:14.853   06:25:31	-- target/bdev_io_wait.sh@38 -- # wait 74149
00:15:14.853   06:25:31	-- target/bdev_io_wait.sh@39 -- # wait 74151
00:15:14.853   06:25:31	-- target/bdev_io_wait.sh@40 -- # wait 74155
00:15:15.112   06:25:31	-- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:15.112   06:25:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.112   06:25:31	-- common/autotest_common.sh@10 -- # set +x
00:15:15.112   06:25:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.112   06:25:31	-- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:15:15.112   06:25:31	-- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:15:15.112   06:25:31	-- nvmf/common.sh@476 -- # nvmfcleanup
00:15:15.112   06:25:31	-- nvmf/common.sh@116 -- # sync
00:15:15.112   06:25:31	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:15:15.112   06:25:31	-- nvmf/common.sh@119 -- # set +e
00:15:15.113   06:25:31	-- nvmf/common.sh@120 -- # for i in {1..20}
00:15:15.113   06:25:31	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:15:15.113  rmmod nvme_tcp
00:15:15.113  rmmod nvme_fabrics
00:15:15.113  rmmod nvme_keyring
00:15:15.113   06:25:32	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:15:15.113   06:25:32	-- nvmf/common.sh@123 -- # set -e
00:15:15.113   06:25:32	-- nvmf/common.sh@124 -- # return 0
00:15:15.113   06:25:32	-- nvmf/common.sh@477 -- # '[' -n 74094 ']'
00:15:15.113   06:25:32	-- nvmf/common.sh@478 -- # killprocess 74094
00:15:15.113   06:25:32	-- common/autotest_common.sh@936 -- # '[' -z 74094 ']'
00:15:15.113   06:25:32	-- common/autotest_common.sh@940 -- # kill -0 74094
00:15:15.113    06:25:32	-- common/autotest_common.sh@941 -- # uname
00:15:15.113   06:25:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:15.113    06:25:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74094
00:15:15.113  killing process with pid 74094
00:15:15.113   06:25:32	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:15.113   06:25:32	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:15.113   06:25:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 74094'
00:15:15.113   06:25:32	-- common/autotest_common.sh@955 -- # kill 74094
00:15:15.113   06:25:32	-- common/autotest_common.sh@960 -- # wait 74094
00:15:15.681   06:25:32	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:15:15.681   06:25:32	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:15:15.681   06:25:32	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:15:15.681   06:25:32	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:15.681   06:25:32	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:15.681   06:25:32	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:15.681   06:25:32	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:15.681    06:25:32	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:15.681   06:25:32	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:15:15.681  
00:15:15.681  real	0m4.465s
00:15:15.681  user	0m18.973s
00:15:15.681  sys	0m2.242s
00:15:15.681   06:25:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:15.681   06:25:32	-- common/autotest_common.sh@10 -- # set +x
00:15:15.681  ************************************
00:15:15.681  END TEST nvmf_bdev_io_wait
00:15:15.681  ************************************
00:15:15.681   06:25:32	-- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:15:15.681   06:25:32	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:15.681   06:25:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:15.681   06:25:32	-- common/autotest_common.sh@10 -- # set +x
00:15:15.681  ************************************
00:15:15.681  START TEST nvmf_queue_depth
00:15:15.681  ************************************
00:15:15.681   06:25:32	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:15:15.681  * Looking for test storage...
00:15:15.681  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:15:15.681    06:25:32	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:15.681     06:25:32	-- common/autotest_common.sh@1690 -- # lcov --version
00:15:15.681     06:25:32	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:15.681    06:25:32	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:15.681    06:25:32	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:15.681    06:25:32	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:15.681    06:25:32	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:15.681    06:25:32	-- scripts/common.sh@335 -- # IFS=.-:
00:15:15.681    06:25:32	-- scripts/common.sh@335 -- # read -ra ver1
00:15:15.681    06:25:32	-- scripts/common.sh@336 -- # IFS=.-:
00:15:15.681    06:25:32	-- scripts/common.sh@336 -- # read -ra ver2
00:15:15.681    06:25:32	-- scripts/common.sh@337 -- # local 'op=<'
00:15:15.681    06:25:32	-- scripts/common.sh@339 -- # ver1_l=2
00:15:15.681    06:25:32	-- scripts/common.sh@340 -- # ver2_l=1
00:15:15.681    06:25:32	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:15.681    06:25:32	-- scripts/common.sh@343 -- # case "$op" in
00:15:15.681    06:25:32	-- scripts/common.sh@344 -- # : 1
00:15:15.681    06:25:32	-- scripts/common.sh@363 -- # (( v = 0 ))
00:15:15.681    06:25:32	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:15.681     06:25:32	-- scripts/common.sh@364 -- # decimal 1
00:15:15.681     06:25:32	-- scripts/common.sh@352 -- # local d=1
00:15:15.681     06:25:32	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:15.681     06:25:32	-- scripts/common.sh@354 -- # echo 1
00:15:15.681    06:25:32	-- scripts/common.sh@364 -- # ver1[v]=1
00:15:15.681     06:25:32	-- scripts/common.sh@365 -- # decimal 2
00:15:15.681     06:25:32	-- scripts/common.sh@352 -- # local d=2
00:15:15.681     06:25:32	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:15.681     06:25:32	-- scripts/common.sh@354 -- # echo 2
00:15:15.681    06:25:32	-- scripts/common.sh@365 -- # ver2[v]=2
00:15:15.681    06:25:32	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:15:15.681    06:25:32	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:15:15.681    06:25:32	-- scripts/common.sh@367 -- # return 0
00:15:15.681    06:25:32	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:15.681    06:25:32	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:15:15.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:15.681  		--rc genhtml_branch_coverage=1
00:15:15.681  		--rc genhtml_function_coverage=1
00:15:15.681  		--rc genhtml_legend=1
00:15:15.681  		--rc geninfo_all_blocks=1
00:15:15.681  		--rc geninfo_unexecuted_blocks=1
00:15:15.681  		
00:15:15.681  		'
00:15:15.681    06:25:32	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:15:15.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:15.681  		--rc genhtml_branch_coverage=1
00:15:15.681  		--rc genhtml_function_coverage=1
00:15:15.681  		--rc genhtml_legend=1
00:15:15.681  		--rc geninfo_all_blocks=1
00:15:15.681  		--rc geninfo_unexecuted_blocks=1
00:15:15.681  		
00:15:15.681  		'
00:15:15.681    06:25:32	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:15:15.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:15.681  		--rc genhtml_branch_coverage=1
00:15:15.681  		--rc genhtml_function_coverage=1
00:15:15.681  		--rc genhtml_legend=1
00:15:15.681  		--rc geninfo_all_blocks=1
00:15:15.681  		--rc geninfo_unexecuted_blocks=1
00:15:15.681  		
00:15:15.681  		'
00:15:15.681    06:25:32	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:15:15.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:15.681  		--rc genhtml_branch_coverage=1
00:15:15.681  		--rc genhtml_function_coverage=1
00:15:15.681  		--rc genhtml_legend=1
00:15:15.681  		--rc geninfo_all_blocks=1
00:15:15.681  		--rc geninfo_unexecuted_blocks=1
00:15:15.681  		
00:15:15.681  		'
00:15:15.681   06:25:32	-- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:15:15.681     06:25:32	-- nvmf/common.sh@7 -- # uname -s
00:15:15.681    06:25:32	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:15.681    06:25:32	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:15.681    06:25:32	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:15.681    06:25:32	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:15.681    06:25:32	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:15.681    06:25:32	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:15.681    06:25:32	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:15.681    06:25:32	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:15.681    06:25:32	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:15.681     06:25:32	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:15.681    06:25:32	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:15:15.681    06:25:32	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:15:15.681    06:25:32	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:15.681    06:25:32	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:15.681    06:25:32	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:15:15.681    06:25:32	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:15.681     06:25:32	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:15.681     06:25:32	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:15.681     06:25:32	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:15.682      06:25:32	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:15.682      06:25:32	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:15.682      06:25:32	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:15.682      06:25:32	-- paths/export.sh@5 -- # export PATH
00:15:15.682      06:25:32	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:15.682    06:25:32	-- nvmf/common.sh@46 -- # : 0
00:15:15.682    06:25:32	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:15:15.682    06:25:32	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:15:15.682    06:25:32	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:15:15.682    06:25:32	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:15.682    06:25:32	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:15.682    06:25:32	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:15:15.682    06:25:32	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:15:15.682    06:25:32	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:15:15.682   06:25:32	-- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:15:15.682   06:25:32	-- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:15:15.682   06:25:32	-- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:15:15.682   06:25:32	-- target/queue_depth.sh@19 -- # nvmftestinit
00:15:15.682   06:25:32	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:15:15.682   06:25:32	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:15.682   06:25:32	-- nvmf/common.sh@436 -- # prepare_net_devs
00:15:15.682   06:25:32	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:15:15.682   06:25:32	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:15:15.682   06:25:32	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:15.682   06:25:32	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:15.682    06:25:32	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:15.682   06:25:32	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:15:15.682   06:25:32	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:15:15.682   06:25:32	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:15:15.682   06:25:32	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:15:15.682   06:25:32	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:15:15.682   06:25:32	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:15:15.682   06:25:32	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:15.682   06:25:32	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:15.682   06:25:32	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:15:15.682   06:25:32	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:15:15.682   06:25:32	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:15:15.682   06:25:32	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:15:15.682   06:25:32	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:15:15.682   06:25:32	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:15.682   06:25:32	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:15:15.682   06:25:32	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:15:15.682   06:25:32	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:15:15.682   06:25:32	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:15:15.682   06:25:32	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:15:15.682   06:25:32	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:15:15.682  Cannot find device "nvmf_tgt_br"
00:15:15.941   06:25:32	-- nvmf/common.sh@154 -- # true
00:15:15.941   06:25:32	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:15:15.941  Cannot find device "nvmf_tgt_br2"
00:15:15.941   06:25:32	-- nvmf/common.sh@155 -- # true
00:15:15.941   06:25:32	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:15:15.941   06:25:32	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:15:15.941  Cannot find device "nvmf_tgt_br"
00:15:15.941   06:25:32	-- nvmf/common.sh@157 -- # true
00:15:15.941   06:25:32	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:15:15.941  Cannot find device "nvmf_tgt_br2"
00:15:15.941   06:25:32	-- nvmf/common.sh@158 -- # true
00:15:15.941   06:25:32	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:15:15.941   06:25:32	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:15:15.941   06:25:32	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:15.941  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:15.941   06:25:32	-- nvmf/common.sh@161 -- # true
00:15:15.941   06:25:32	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:15.941  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:15.941   06:25:32	-- nvmf/common.sh@162 -- # true
00:15:15.941   06:25:32	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:15:15.941   06:25:32	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:15:15.941   06:25:32	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:15:15.941   06:25:32	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:15:15.941   06:25:32	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:15:15.941   06:25:32	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:15:15.941   06:25:32	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:15:15.941   06:25:32	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:15:15.941   06:25:32	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:15:15.941   06:25:32	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:15:15.941   06:25:32	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:15:15.941   06:25:32	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:15:15.941   06:25:32	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:15:15.941   06:25:32	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:15:15.941   06:25:32	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:15:15.941   06:25:32	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:15.941   06:25:32	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:15:15.941   06:25:32	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:15:15.941   06:25:32	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:15:15.941   06:25:32	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:16.199   06:25:32	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:16.199   06:25:32	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:16.199   06:25:32	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:15:16.199   06:25:32	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:15:16.199  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:16.199  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms
00:15:16.199  
00:15:16.199  --- 10.0.0.2 ping statistics ---
00:15:16.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:16.199  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:15:16.199   06:25:32	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:15:16.199  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:16.199  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:15:16.199  
00:15:16.199  --- 10.0.0.3 ping statistics ---
00:15:16.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:16.199  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:15:16.199   06:25:32	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:16.199  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:16.199  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:15:16.199  
00:15:16.199  --- 10.0.0.1 ping statistics ---
00:15:16.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:16.199  rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:15:16.199   06:25:32	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:16.199   06:25:32	-- nvmf/common.sh@421 -- # return 0
00:15:16.199   06:25:32	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:15:16.199   06:25:32	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:16.199   06:25:32	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:15:16.199   06:25:32	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:15:16.199   06:25:32	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:16.200   06:25:32	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:15:16.200   06:25:32	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:15:16.200   06:25:32	-- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:15:16.200   06:25:32	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:15:16.200   06:25:32	-- common/autotest_common.sh@722 -- # xtrace_disable
00:15:16.200   06:25:32	-- common/autotest_common.sh@10 -- # set +x
00:15:16.200   06:25:32	-- nvmf/common.sh@469 -- # nvmfpid=74393
00:15:16.200   06:25:32	-- nvmf/common.sh@470 -- # waitforlisten 74393
00:15:16.200   06:25:32	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:15:16.200   06:25:32	-- common/autotest_common.sh@829 -- # '[' -z 74393 ']'
00:15:16.200   06:25:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:16.200   06:25:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:16.200  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:16.200   06:25:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:16.200   06:25:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:16.200   06:25:32	-- common/autotest_common.sh@10 -- # set +x
00:15:16.200  [2024-12-16 06:25:33.044228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:16.200  [2024-12-16 06:25:33.044315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:16.458  [2024-12-16 06:25:33.180865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:16.458  [2024-12-16 06:25:33.268609] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:16.458  [2024-12-16 06:25:33.268761] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:16.458  [2024-12-16 06:25:33.268773] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:16.458  [2024-12-16 06:25:33.268782] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:16.458  [2024-12-16 06:25:33.268813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:17.394   06:25:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:17.394   06:25:34	-- common/autotest_common.sh@862 -- # return 0
00:15:17.394   06:25:34	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:15:17.394   06:25:34	-- common/autotest_common.sh@728 -- # xtrace_disable
00:15:17.394   06:25:34	-- common/autotest_common.sh@10 -- # set +x
00:15:17.394   06:25:34	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:17.394   06:25:34	-- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:17.394   06:25:34	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.394   06:25:34	-- common/autotest_common.sh@10 -- # set +x
00:15:17.394  [2024-12-16 06:25:34.087202] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:17.394   06:25:34	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.394   06:25:34	-- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:15:17.394   06:25:34	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.394   06:25:34	-- common/autotest_common.sh@10 -- # set +x
00:15:17.394  Malloc0
00:15:17.394   06:25:34	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.394   06:25:34	-- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:15:17.394   06:25:34	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.394   06:25:34	-- common/autotest_common.sh@10 -- # set +x
00:15:17.394   06:25:34	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.394   06:25:34	-- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:17.394   06:25:34	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.394   06:25:34	-- common/autotest_common.sh@10 -- # set +x
00:15:17.394   06:25:34	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.394   06:25:34	-- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:17.394   06:25:34	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.394   06:25:34	-- common/autotest_common.sh@10 -- # set +x
00:15:17.394  [2024-12-16 06:25:34.149809] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:17.394   06:25:34	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
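The trace above brings the queue-depth target up entirely through SPDK RPCs. A minimal by-hand sketch of the same sequence, assuming the rpc_cmd helper is simply a wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket (both paths appear elsewhere in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the same options the script passes (-o -u 8192)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB RAM-backed bdev with 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # subsystem open to any host (-a) with a fixed serial number
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listen on the address the initiator-side veth can reach
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420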
00:15:17.394   06:25:34	-- target/queue_depth.sh@30 -- # bdevperf_pid=74443
00:15:17.394   06:25:34	-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:15:17.394   06:25:34	-- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:15:17.394   06:25:34	-- target/queue_depth.sh@33 -- # waitforlisten 74443 /var/tmp/bdevperf.sock
00:15:17.394   06:25:34	-- common/autotest_common.sh@829 -- # '[' -z 74443 ']'
00:15:17.394   06:25:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:17.394   06:25:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:17.394  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:17.394   06:25:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:15:17.394   06:25:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:17.394   06:25:34	-- common/autotest_common.sh@10 -- # set +x
00:15:17.394  [2024-12-16 06:25:34.208870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:17.394  [2024-12-16 06:25:34.208946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74443 ]
00:15:17.394  [2024-12-16 06:25:34.340986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:17.653  [2024-12-16 06:25:34.437395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:18.220   06:25:35	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:18.220   06:25:35	-- common/autotest_common.sh@862 -- # return 0
00:15:18.220   06:25:35	-- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:18.220   06:25:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.220   06:25:35	-- common/autotest_common.sh@10 -- # set +x
00:15:18.478  NVMe0n1
00:15:18.478   06:25:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.478   06:25:35	-- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:18.478  Running I/O for 10 seconds...
00:15:28.451  
00:15:28.451                                                                                                  Latency(us)
00:15:28.451  
[2024-12-16T06:25:45.427Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:28.451  
[2024-12-16T06:25:45.427Z]  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:15:28.451  	 Verification LBA range: start 0x0 length 0x4000
00:15:28.451  	 NVMe0n1             :      10.05   17131.83      66.92       0.00     0.00   59586.20   11498.59   57433.37
00:15:28.451  
[2024-12-16T06:25:45.427Z]  ===================================================================================================================
00:15:28.451  
[2024-12-16T06:25:45.427Z]  Total                       :              17131.83      66.92       0.00     0.00   59586.20   11498.59   57433.37
00:15:28.451  0
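The summary line is internally consistent: 17131.83 IOPS of 4096-byte I/O is 17131.83 x 4096 / 1048576 = 66.92 MiB/s, matching the MiB/s column, and with 1024 commands kept in flight Little's law predicts an average latency of 1024 / 17131.83 = 59.8 ms, closely matching the reported 59586.20 us. That suggests the latency at this depth is dominated by queueing rather than by the malloc-backed bdev itself.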
00:15:28.451   06:25:45	-- target/queue_depth.sh@39 -- # killprocess 74443
00:15:28.451   06:25:45	-- common/autotest_common.sh@936 -- # '[' -z 74443 ']'
00:15:28.451   06:25:45	-- common/autotest_common.sh@940 -- # kill -0 74443
00:15:28.451    06:25:45	-- common/autotest_common.sh@941 -- # uname
00:15:28.451   06:25:45	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:28.451    06:25:45	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74443
00:15:28.710   06:25:45	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:28.710   06:25:45	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:28.710  killing process with pid 74443
00:15:28.710   06:25:45	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 74443'
00:15:28.711  Received shutdown signal, test time was about 10.000000 seconds
00:15:28.711  
00:15:28.711                                                                                                  Latency(us)
00:15:28.711  
[2024-12-16T06:25:45.687Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:28.711  
[2024-12-16T06:25:45.687Z]  ===================================================================================================================
00:15:28.711  
[2024-12-16T06:25:45.687Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:15:28.711   06:25:45	-- common/autotest_common.sh@955 -- # kill 74443
00:15:28.711   06:25:45	-- common/autotest_common.sh@960 -- # wait 74443
00:15:28.711   06:25:45	-- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:15:28.711   06:25:45	-- target/queue_depth.sh@43 -- # nvmftestfini
00:15:28.711   06:25:45	-- nvmf/common.sh@476 -- # nvmfcleanup
00:15:28.711   06:25:45	-- nvmf/common.sh@116 -- # sync
00:15:28.969   06:25:45	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:15:28.969   06:25:45	-- nvmf/common.sh@119 -- # set +e
00:15:28.969   06:25:45	-- nvmf/common.sh@120 -- # for i in {1..20}
00:15:28.969   06:25:45	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:15:28.969  rmmod nvme_tcp
00:15:28.969  rmmod nvme_fabrics
00:15:28.969  rmmod nvme_keyring
00:15:28.969   06:25:45	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:15:28.969   06:25:45	-- nvmf/common.sh@123 -- # set -e
00:15:28.969   06:25:45	-- nvmf/common.sh@124 -- # return 0
00:15:28.969   06:25:45	-- nvmf/common.sh@477 -- # '[' -n 74393 ']'
00:15:28.969   06:25:45	-- nvmf/common.sh@478 -- # killprocess 74393
00:15:28.969   06:25:45	-- common/autotest_common.sh@936 -- # '[' -z 74393 ']'
00:15:28.969   06:25:45	-- common/autotest_common.sh@940 -- # kill -0 74393
00:15:28.969    06:25:45	-- common/autotest_common.sh@941 -- # uname
00:15:28.969   06:25:45	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:28.969    06:25:45	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74393
00:15:28.969   06:25:45	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:15:28.969   06:25:45	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:15:28.969  killing process with pid 74393
00:15:28.969   06:25:45	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 74393'
00:15:28.969   06:25:45	-- common/autotest_common.sh@955 -- # kill 74393
00:15:28.969   06:25:45	-- common/autotest_common.sh@960 -- # wait 74393
00:15:29.228   06:25:46	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:15:29.228   06:25:46	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:15:29.228   06:25:46	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:15:29.228   06:25:46	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:29.228   06:25:46	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:29.228   06:25:46	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:29.228   06:25:46	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:29.228    06:25:46	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:29.228   06:25:46	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:15:29.228  
00:15:29.228  real	0m13.727s
00:15:29.228  user	0m22.722s
00:15:29.228  sys	0m2.589s
00:15:29.228   06:25:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:29.228  ************************************
00:15:29.228  END TEST nvmf_queue_depth
00:15:29.228   06:25:46	-- common/autotest_common.sh@10 -- # set +x
00:15:29.228  ************************************
00:15:29.487   06:25:46	-- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:15:29.487   06:25:46	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:29.487   06:25:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:29.487   06:25:46	-- common/autotest_common.sh@10 -- # set +x
00:15:29.487  ************************************
00:15:29.487  START TEST nvmf_multipath
00:15:29.487  ************************************
00:15:29.487   06:25:46	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:15:29.487  * Looking for test storage...
00:15:29.487  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:15:29.487    06:25:46	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:29.487     06:25:46	-- common/autotest_common.sh@1690 -- # lcov --version
00:15:29.487     06:25:46	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:29.487    06:25:46	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:29.487    06:25:46	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:29.487    06:25:46	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:29.487    06:25:46	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:29.488    06:25:46	-- scripts/common.sh@335 -- # IFS=.-:
00:15:29.488    06:25:46	-- scripts/common.sh@335 -- # read -ra ver1
00:15:29.488    06:25:46	-- scripts/common.sh@336 -- # IFS=.-:
00:15:29.488    06:25:46	-- scripts/common.sh@336 -- # read -ra ver2
00:15:29.488    06:25:46	-- scripts/common.sh@337 -- # local 'op=<'
00:15:29.488    06:25:46	-- scripts/common.sh@339 -- # ver1_l=2
00:15:29.488    06:25:46	-- scripts/common.sh@340 -- # ver2_l=1
00:15:29.488    06:25:46	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:29.488    06:25:46	-- scripts/common.sh@343 -- # case "$op" in
00:15:29.488    06:25:46	-- scripts/common.sh@344 -- # : 1
00:15:29.488    06:25:46	-- scripts/common.sh@363 -- # (( v = 0 ))
00:15:29.488    06:25:46	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:29.488     06:25:46	-- scripts/common.sh@364 -- # decimal 1
00:15:29.488     06:25:46	-- scripts/common.sh@352 -- # local d=1
00:15:29.488     06:25:46	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:29.488     06:25:46	-- scripts/common.sh@354 -- # echo 1
00:15:29.488    06:25:46	-- scripts/common.sh@364 -- # ver1[v]=1
00:15:29.488     06:25:46	-- scripts/common.sh@365 -- # decimal 2
00:15:29.488     06:25:46	-- scripts/common.sh@352 -- # local d=2
00:15:29.488     06:25:46	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:29.488     06:25:46	-- scripts/common.sh@354 -- # echo 2
00:15:29.488    06:25:46	-- scripts/common.sh@365 -- # ver2[v]=2
00:15:29.488    06:25:46	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:15:29.488    06:25:46	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:15:29.488    06:25:46	-- scripts/common.sh@367 -- # return 0
00:15:29.488    06:25:46	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:29.488    06:25:46	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:15:29.488  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:29.488  		--rc genhtml_branch_coverage=1
00:15:29.488  		--rc genhtml_function_coverage=1
00:15:29.488  		--rc genhtml_legend=1
00:15:29.488  		--rc geninfo_all_blocks=1
00:15:29.488  		--rc geninfo_unexecuted_blocks=1
00:15:29.488  		
00:15:29.488  		'
00:15:29.488    06:25:46	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:15:29.488  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:29.488  		--rc genhtml_branch_coverage=1
00:15:29.488  		--rc genhtml_function_coverage=1
00:15:29.488  		--rc genhtml_legend=1
00:15:29.488  		--rc geninfo_all_blocks=1
00:15:29.488  		--rc geninfo_unexecuted_blocks=1
00:15:29.488  		
00:15:29.488  		'
00:15:29.488    06:25:46	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:15:29.488  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:29.488  		--rc genhtml_branch_coverage=1
00:15:29.488  		--rc genhtml_function_coverage=1
00:15:29.488  		--rc genhtml_legend=1
00:15:29.488  		--rc geninfo_all_blocks=1
00:15:29.488  		--rc geninfo_unexecuted_blocks=1
00:15:29.488  		
00:15:29.488  		'
00:15:29.488    06:25:46	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:15:29.488  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:29.488  		--rc genhtml_branch_coverage=1
00:15:29.488  		--rc genhtml_function_coverage=1
00:15:29.488  		--rc genhtml_legend=1
00:15:29.488  		--rc geninfo_all_blocks=1
00:15:29.488  		--rc geninfo_unexecuted_blocks=1
00:15:29.488  		
00:15:29.488  		'
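The lcov probe traced above (cmp_versions 1.15 '<' 2) decides which coverage flags to export: the comparison splits both versions on dots, finds 1 < 2 in the first field, returns 0, and the pre-2.0 spelling of the options (--rc lcov_branch_coverage=1, --rc lcov_function_coverage=1) is kept. A small standalone sketch of that field-wise compare, with ver_lt as a hypothetical stand-in for the script's cmp_versions helper:

    ver_lt() {
        local -a a b
        local i
        IFS='.-' read -ra a <<< "$1"
        IFS='.-' read -ra b <<< "$2"
        # compare field by field, treating missing fields as 0
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1
    }
    ver_lt 1.15 2 && echo "older than 2.x: keep the --rc lcov_branch_coverage=1 style options"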
00:15:29.488   06:25:46	-- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:15:29.488     06:25:46	-- nvmf/common.sh@7 -- # uname -s
00:15:29.488    06:25:46	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:29.488    06:25:46	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:29.488    06:25:46	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:29.488    06:25:46	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:29.488    06:25:46	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:29.488    06:25:46	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:29.488    06:25:46	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:29.488    06:25:46	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:29.488    06:25:46	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:29.488     06:25:46	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:29.488    06:25:46	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:15:29.488    06:25:46	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:15:29.488    06:25:46	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:29.488    06:25:46	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:29.488    06:25:46	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:15:29.488    06:25:46	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:29.488     06:25:46	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:29.488     06:25:46	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:29.488     06:25:46	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:29.488      06:25:46	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:29.488      06:25:46	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:29.488      06:25:46	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:29.488      06:25:46	-- paths/export.sh@5 -- # export PATH
00:15:29.488      06:25:46	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:29.488    06:25:46	-- nvmf/common.sh@46 -- # : 0
00:15:29.488    06:25:46	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:15:29.488    06:25:46	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:15:29.488    06:25:46	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:15:29.488    06:25:46	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:29.488    06:25:46	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:29.488    06:25:46	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:15:29.488    06:25:46	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:15:29.488    06:25:46	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:15:29.488   06:25:46	-- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:29.488   06:25:46	-- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:29.488   06:25:46	-- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:15:29.488   06:25:46	-- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:29.488   06:25:46	-- target/multipath.sh@43 -- # nvmftestinit
00:15:29.488   06:25:46	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:15:29.488   06:25:46	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:29.488   06:25:46	-- nvmf/common.sh@436 -- # prepare_net_devs
00:15:29.488   06:25:46	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:15:29.488   06:25:46	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:15:29.488   06:25:46	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:29.488   06:25:46	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:29.488    06:25:46	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:29.488   06:25:46	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:15:29.488   06:25:46	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:15:29.488   06:25:46	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:15:29.488   06:25:46	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:15:29.488   06:25:46	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:15:29.488   06:25:46	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:15:29.488   06:25:46	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:29.488   06:25:46	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:29.488   06:25:46	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:15:29.488   06:25:46	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:15:29.488   06:25:46	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:15:29.488   06:25:46	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:15:29.488   06:25:46	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:15:29.488   06:25:46	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:29.488   06:25:46	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:15:29.488   06:25:46	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:15:29.488   06:25:46	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:15:29.488   06:25:46	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:15:29.488   06:25:46	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:15:29.488   06:25:46	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:15:29.747  Cannot find device "nvmf_tgt_br"
00:15:29.748   06:25:46	-- nvmf/common.sh@154 -- # true
00:15:29.748   06:25:46	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:15:29.748  Cannot find device "nvmf_tgt_br2"
00:15:29.748   06:25:46	-- nvmf/common.sh@155 -- # true
00:15:29.748   06:25:46	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:15:29.748   06:25:46	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:15:29.748  Cannot find device "nvmf_tgt_br"
00:15:29.748   06:25:46	-- nvmf/common.sh@157 -- # true
00:15:29.748   06:25:46	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:15:29.748  Cannot find device "nvmf_tgt_br2"
00:15:29.748   06:25:46	-- nvmf/common.sh@158 -- # true
00:15:29.748   06:25:46	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:15:29.748   06:25:46	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:15:29.748   06:25:46	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:29.748  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:29.748   06:25:46	-- nvmf/common.sh@161 -- # true
00:15:29.748   06:25:46	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:29.748  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:29.748   06:25:46	-- nvmf/common.sh@162 -- # true
00:15:29.748   06:25:46	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:15:29.748   06:25:46	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:15:29.748   06:25:46	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:15:29.748   06:25:46	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:15:29.748   06:25:46	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:15:29.748   06:25:46	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:15:29.748   06:25:46	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:15:29.748   06:25:46	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:15:29.748   06:25:46	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:15:29.748   06:25:46	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:15:29.748   06:25:46	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:15:29.748   06:25:46	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:15:29.748   06:25:46	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:15:29.748   06:25:46	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:15:29.748   06:25:46	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:15:29.748   06:25:46	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:29.748   06:25:46	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:15:29.748   06:25:46	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:15:29.748   06:25:46	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:15:29.748   06:25:46	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:30.006   06:25:46	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:30.006   06:25:46	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:30.006   06:25:46	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
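Summarizing the ip commands above, the test rig is three veth pairs joined by one Linux bridge, with the target ends of two pairs moved into the nvmf_tgt_ns_spdk namespace:

    host namespace                         bridge nvmf_br          netns nvmf_tgt_ns_spdk
    nvmf_init_if  10.0.0.1/24  <-peer->    nvmf_init_br
                                           nvmf_tgt_br   <-peer->  nvmf_tgt_if   10.0.0.2/24
                                           nvmf_tgt_br2  <-peer->  nvmf_tgt_if2  10.0.0.3/24

The two iptables rules then accept TCP traffic to port 4420 arriving on nvmf_init_if and allow frames to be forwarded between ports of nvmf_br; the ping checks that follow confirm 10.0.0.0/24 connectivity in both directions.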
00:15:30.006   06:25:46	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:15:30.006  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:30.006  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms
00:15:30.006  
00:15:30.006  --- 10.0.0.2 ping statistics ---
00:15:30.006  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:30.006  rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:15:30.006   06:25:46	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:15:30.006  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:30.006  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms
00:15:30.006  
00:15:30.006  --- 10.0.0.3 ping statistics ---
00:15:30.006  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:30.006  rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:15:30.006   06:25:46	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:30.006  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:30.006  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:15:30.006  
00:15:30.006  --- 10.0.0.1 ping statistics ---
00:15:30.006  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:30.006  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:15:30.006   06:25:46	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:30.006   06:25:46	-- nvmf/common.sh@421 -- # return 0
00:15:30.006   06:25:46	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:15:30.006   06:25:46	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:30.006   06:25:46	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:15:30.006   06:25:46	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:15:30.006   06:25:46	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:30.006   06:25:46	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:15:30.006   06:25:46	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:15:30.006   06:25:46	-- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']'
00:15:30.006   06:25:46	-- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:15:30.006   06:25:46	-- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:15:30.006   06:25:46	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:15:30.006   06:25:46	-- common/autotest_common.sh@722 -- # xtrace_disable
00:15:30.006   06:25:46	-- common/autotest_common.sh@10 -- # set +x
00:15:30.006   06:25:46	-- nvmf/common.sh@469 -- # nvmfpid=74784
00:15:30.006   06:25:46	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:30.006   06:25:46	-- nvmf/common.sh@470 -- # waitforlisten 74784
00:15:30.006   06:25:46	-- common/autotest_common.sh@829 -- # '[' -z 74784 ']'
00:15:30.006   06:25:46	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:30.006   06:25:46	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:30.006   06:25:46	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:30.006  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:30.006   06:25:46	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:30.006   06:25:46	-- common/autotest_common.sh@10 -- # set +x
00:15:30.006  [2024-12-16 06:25:46.865712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:30.006  [2024-12-16 06:25:46.865802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:30.266  [2024-12-16 06:25:47.007237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:30.266  [2024-12-16 06:25:47.118515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:30.266  [2024-12-16 06:25:47.118679] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:30.266  [2024-12-16 06:25:47.118696] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:30.266  [2024-12-16 06:25:47.118707] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:30.266  [2024-12-16 06:25:47.118880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:30.266  [2024-12-16 06:25:47.119348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:15:30.266  [2024-12-16 06:25:47.119454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:15:30.266  [2024-12-16 06:25:47.119471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:31.201   06:25:47	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:31.201   06:25:47	-- common/autotest_common.sh@862 -- # return 0
00:15:31.201   06:25:47	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:15:31.201   06:25:47	-- common/autotest_common.sh@728 -- # xtrace_disable
00:15:31.201   06:25:47	-- common/autotest_common.sh@10 -- # set +x
00:15:31.201   06:25:47	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:31.201   06:25:47	-- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:15:31.460  [2024-12-16 06:25:48.184750] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:31.460   06:25:48	-- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:15:31.460  Malloc0
00:15:31.718   06:25:48	-- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
00:15:31.718   06:25:48	-- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:31.977   06:25:48	-- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:32.236  [2024-12-16 06:25:49.055781] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:32.236   06:25:49	-- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:15:32.495  [2024-12-16 06:25:49.264043] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:15:32.495   06:25:49	-- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
00:15:32.754   06:25:49	-- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
00:15:32.754   06:25:49	-- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME
00:15:32.754   06:25:49	-- common/autotest_common.sh@1187 -- # local i=0
00:15:32.754   06:25:49	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:15:32.754   06:25:49	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:15:32.754   06:25:49	-- common/autotest_common.sh@1194 -- # sleep 2
00:15:35.287   06:25:51	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:15:35.287    06:25:51	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:15:35.287    06:25:51	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:15:35.287   06:25:51	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:15:35.287   06:25:51	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:15:35.287   06:25:51	-- common/autotest_common.sh@1197 -- # return 0
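Both nvme connect calls above target the same subsystem NQN through different portals (10.0.0.2 and 10.0.0.3), so the host ends up with a single namespace reachable over two controller paths. A quick manual check, assuming the subsystem enumerates as nvme-subsys0 as it does in the lines that follow:

    # one block device whose serial matches the subsystem serial
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME         # expect 1
    # two per-path controller nodes under the shared subsystem
    ls -d /sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*     # expect nvme0c0n1 and nvme0c1n1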
00:15:35.287    06:25:51	-- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME
00:15:35.287    06:25:51	-- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s
00:15:35.287    06:25:51	-- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/*
00:15:35.287    06:25:51	-- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:15:35.287    06:25:51	-- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]]
00:15:35.287    06:25:51	-- target/multipath.sh@38 -- # echo nvme-subsys0
00:15:35.287    06:25:51	-- target/multipath.sh@38 -- # return 0
00:15:35.287   06:25:51	-- target/multipath.sh@72 -- # subsystem=nvme-subsys0
00:15:35.287   06:25:51	-- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
00:15:35.287   06:25:51	-- target/multipath.sh@74 -- # paths=("${paths[@]##*/}")
00:15:35.287   06:25:51	-- target/multipath.sh@76 -- # (( 2 == 2 ))
00:15:35.287   06:25:51	-- target/multipath.sh@78 -- # p0=nvme0c0n1
00:15:35.287   06:25:51	-- target/multipath.sh@79 -- # p1=nvme0c1n1
00:15:35.287   06:25:51	-- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized
00:15:35.287   06:25:51	-- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:15:35.287   06:25:51	-- target/multipath.sh@22 -- # local timeout=20
00:15:35.287   06:25:51	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:15:35.287   06:25:51	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:15:35.287   06:25:51	-- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:15:35.287   06:25:51	-- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
00:15:35.287   06:25:51	-- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:15:35.287   06:25:51	-- target/multipath.sh@22 -- # local timeout=20
00:15:35.287   06:25:51	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:15:35.287   06:25:51	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:35.287   06:25:51	-- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
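check_ana_state is just a sysfs poll: it reads the path's ana_state attribute up to 20 times, one second apart, until the expected string appears. At this point both paths should still report optimized:

    cat /sys/block/nvme0c0n1/ana_state    # optimized
    cat /sys/block/nvme0c1n1/ana_state    # optimized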
00:15:35.287   06:25:51	-- target/multipath.sh@85 -- # echo numa
00:15:35.287   06:25:51	-- target/multipath.sh@88 -- # fio_pid=74922
00:15:35.287   06:25:51	-- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:15:35.287   06:25:51	-- target/multipath.sh@90 -- # sleep 1
00:15:35.287  [global]
00:15:35.287  thread=1
00:15:35.287  invalidate=1
00:15:35.287  rw=randrw
00:15:35.287  time_based=1
00:15:35.287  runtime=6
00:15:35.287  ioengine=libaio
00:15:35.287  direct=1
00:15:35.287  bs=4096
00:15:35.287  iodepth=128
00:15:35.287  norandommap=0
00:15:35.287  numjobs=1
00:15:35.287  
00:15:35.287  verify_dump=1
00:15:35.287  verify_backlog=512
00:15:35.287  verify_state_save=0
00:15:35.287  do_verify=1
00:15:35.287  verify=crc32c-intel
00:15:35.287  [job0]
00:15:35.287  filename=/dev/nvme0n1
00:15:35.287  Could not set queue depth (nvme0n1)
00:15:35.287  job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:15:35.288  fio-3.35
00:15:35.288  Starting 1 thread
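The job file printed above is what the fio-wrapper call generated; reading the two together, the wrapper flags appear to map directly onto fio options (-i 4096 to bs=4096, -d 128 to iodepth=128, -t randrw to rw=randrw, -r 6 to runtime=6 with time_based=1), with -p nvmf selecting the NVMe-oF block device /dev/nvme0n1 as the job's filename (an inference from the generated job shown here, not from the wrapper's source).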
00:15:35.856   06:25:52	-- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:15:36.114   06:25:53	-- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:15:36.373   06:25:53	-- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible
00:15:36.373   06:25:53	-- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:15:36.373   06:25:53	-- target/multipath.sh@22 -- # local timeout=20
00:15:36.373   06:25:53	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:15:36.373   06:25:53	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:15:36.373   06:25:53	-- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:15:36.373   06:25:53	-- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized
00:15:36.373   06:25:53	-- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:15:36.373   06:25:53	-- target/multipath.sh@22 -- # local timeout=20
00:15:36.373   06:25:53	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:15:36.373   06:25:53	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:36.373   06:25:53	-- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:15:36.373   06:25:53	-- target/multipath.sh@25 -- # sleep 1s
00:15:37.310   06:25:54	-- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:15:37.310   06:25:54	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:37.310   06:25:54	-- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:15:37.310   06:25:54	-- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:15:37.877   06:25:54	-- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:15:38.136   06:25:54	-- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized
00:15:38.136   06:25:54	-- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:15:38.136   06:25:54	-- target/multipath.sh@22 -- # local timeout=20
00:15:38.136   06:25:54	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:15:38.136   06:25:54	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:15:38.136   06:25:54	-- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:15:38.136   06:25:54	-- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible
00:15:38.136   06:25:54	-- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:15:38.136   06:25:54	-- target/multipath.sh@22 -- # local timeout=20
00:15:38.136   06:25:54	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:15:38.136   06:25:54	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:38.136   06:25:54	-- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:15:38.136   06:25:54	-- target/multipath.sh@25 -- # sleep 1s
00:15:39.072   06:25:55	-- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:15:39.072   06:25:55	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:39.072   06:25:55	-- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:15:39.072   06:25:55	-- target/multipath.sh@104 -- # wait 74922
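The sequence just traced is the actual multipath failover exercise: with fio keeping 128 I/Os in flight on /dev/nvme0n1, the 10.0.0.2 listener is set inaccessible and 10.0.0.3 non_optimized, the host's ana_state files are polled until they read inaccessible and non-optimized, and then the roles are swapped so 10.0.0.2 carries the I/O while 10.0.0.3 goes inaccessible. Each path is therefore unusable for part of the run, and the fio job whose results follow still finishes with err= 0.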
00:15:41.650  
00:15:41.650  job0: (groupid=0, jobs=1): err= 0: pid=74949: Mon Dec 16 06:25:58 2024
00:15:41.650    read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(312MiB/6002msec)
00:15:41.650      slat (nsec): min=1781, max=9359.6k, avg=43453.61, stdev=200847.32
00:15:41.650      clat (usec): min=1043, max=16393, avg=6590.29, stdev=1078.07
00:15:41.650       lat (usec): min=1908, max=16406, avg=6633.74, stdev=1084.63
00:15:41.650      clat percentiles (usec):
00:15:41.650       |  1.00th=[ 4015],  5.00th=[ 5080], 10.00th=[ 5473], 20.00th=[ 5735],
00:15:41.650       | 30.00th=[ 5997], 40.00th=[ 6259], 50.00th=[ 6521], 60.00th=[ 6783],
00:15:41.650       | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7767], 95.00th=[ 8455],
00:15:41.650       | 99.00th=[ 9765], 99.50th=[10028], 99.90th=[12125], 99.95th=[15401],
00:15:41.650       | 99.99th=[16319]
00:15:41.650     bw (  KiB/s): min=10032, max=35312, per=52.12%, avg=27750.64, stdev=7941.60, samples=11
00:15:41.650     iops        : min= 2508, max= 8828, avg=6937.64, stdev=1985.40, samples=11
00:15:41.650    write: IOPS=7804, BW=30.5MiB/s (32.0MB/s)(159MiB/5213msec); 0 zone resets
00:15:41.650      slat (usec): min=2, max=4231, avg=53.76, stdev=142.73
00:15:41.650      clat (usec): min=1720, max=15679, avg=5772.33, stdev=925.45
00:15:41.650       lat (usec): min=1767, max=15712, avg=5826.10, stdev=927.67
00:15:41.650      clat percentiles (usec):
00:15:41.650       |  1.00th=[ 3326],  5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 5211],
00:15:41.650       | 30.00th=[ 5473], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932],
00:15:41.650       | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6587], 95.00th=[ 6849],
00:15:41.650       | 99.00th=[ 8586], 99.50th=[ 9372], 99.90th=[14615], 99.95th=[15401],
00:15:41.650       | 99.99th=[15533]
00:15:41.650     bw (  KiB/s): min=10296, max=34792, per=88.80%, avg=27721.45, stdev=7722.01, samples=11
00:15:41.650     iops        : min= 2574, max= 8698, avg=6930.36, stdev=1930.50, samples=11
00:15:41.650    lat (msec)   : 2=0.02%, 4=1.98%, 10=97.49%, 20=0.50%
00:15:41.650    cpu          : usr=5.30%, sys=22.01%, ctx=7312, majf=0, minf=127
00:15:41.650    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:15:41.650       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:41.650       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:41.650       issued rwts: total=79883,40684,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:41.650       latency   : target=0, window=0, percentile=100.00%, depth=128
00:15:41.650  
00:15:41.650  Run status group 0 (all jobs):
00:15:41.650     READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=312MiB (327MB), run=6002-6002msec
00:15:41.650    WRITE: bw=30.5MiB/s (32.0MB/s), 30.5MiB/s-30.5MiB/s (32.0MB/s-32.0MB/s), io=159MiB (167MB), run=5213-5213msec
00:15:41.650  
00:15:41.650  Disk stats (read/write):
00:15:41.650    nvme0n1: ios=78884/39949, merge=0/0, ticks=483703/214116, in_queue=697819, util=98.58%
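As a sanity check on the numbers: 79883 read completions of 4096 bytes is about 312 MiB, which over the 6002 ms runtime gives the reported 52.0 MiB/s (54.5 MB/s), and the 40684 writes likewise account for about 159 MiB at 30.5 MiB/s over 5266 ms of write activity; the READ and WRITE lines in the status group are just the issued rwts totals divided by their runtimes.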
00:15:41.650   06:25:58	-- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:15:41.650   06:25:58	-- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:15:41.650   06:25:58	-- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized
00:15:41.650   06:25:58	-- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:15:41.650   06:25:58	-- target/multipath.sh@22 -- # local timeout=20
00:15:41.650   06:25:58	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:15:41.650   06:25:58	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:15:41.650   06:25:58	-- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:15:41.650   06:25:58	-- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized
00:15:41.650   06:25:58	-- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:15:41.650   06:25:58	-- target/multipath.sh@22 -- # local timeout=20
00:15:41.650   06:25:58	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:15:41.650   06:25:58	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:41.650   06:25:58	-- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]]
00:15:41.650   06:25:58	-- target/multipath.sh@25 -- # sleep 1s
00:15:43.026   06:25:59	-- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:15:43.027   06:25:59	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:43.027   06:25:59	-- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:15:43.027   06:25:59	-- target/multipath.sh@113 -- # echo round-robin
00:15:43.027   06:25:59	-- target/multipath.sh@116 -- # fio_pid=75077
00:15:43.027   06:25:59	-- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:15:43.027   06:25:59	-- target/multipath.sh@118 -- # sleep 1
00:15:43.027  [global]
00:15:43.027  thread=1
00:15:43.027  invalidate=1
00:15:43.027  rw=randrw
00:15:43.027  time_based=1
00:15:43.027  runtime=6
00:15:43.027  ioengine=libaio
00:15:43.027  direct=1
00:15:43.027  bs=4096
00:15:43.027  iodepth=128
00:15:43.027  norandommap=0
00:15:43.027  numjobs=1
00:15:43.027  
00:15:43.027  verify_dump=1
00:15:43.027  verify_backlog=512
00:15:43.027  verify_state_save=0
00:15:43.027  do_verify=1
00:15:43.027  verify=crc32c-intel
00:15:43.027  [job0]
00:15:43.027  filename=/dev/nvme0n1
00:15:43.027  Could not set queue depth (nvme0n1)
00:15:43.027  job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:15:43.027  fio-3.35
00:15:43.027  Starting 1 thread
00:15:43.963   06:26:00	-- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:15:43.963   06:26:00	-- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:15:44.222   06:26:01	-- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible
00:15:44.222   06:26:01	-- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:15:44.222   06:26:01	-- target/multipath.sh@22 -- # local timeout=20
00:15:44.222   06:26:01	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:15:44.222   06:26:01	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:15:44.222   06:26:01	-- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:15:44.222   06:26:01	-- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized
00:15:44.222   06:26:01	-- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:15:44.222   06:26:01	-- target/multipath.sh@22 -- # local timeout=20
00:15:44.222   06:26:01	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:15:44.222   06:26:01	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:44.222   06:26:01	-- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:15:44.222   06:26:01	-- target/multipath.sh@25 -- # sleep 1s
00:15:45.158   06:26:02	-- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:15:45.158   06:26:02	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:45.158   06:26:02	-- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:15:45.158   06:26:02	-- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:15:45.416   06:26:02	-- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:15:45.675   06:26:02	-- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized
00:15:45.675   06:26:02	-- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:15:45.675   06:26:02	-- target/multipath.sh@22 -- # local timeout=20
00:15:45.675   06:26:02	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:15:45.675   06:26:02	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:15:45.675   06:26:02	-- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:15:45.675   06:26:02	-- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible
00:15:45.675   06:26:02	-- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:15:45.675   06:26:02	-- target/multipath.sh@22 -- # local timeout=20
00:15:45.675   06:26:02	-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:15:45.675   06:26:02	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:45.675   06:26:02	-- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:15:45.676   06:26:02	-- target/multipath.sh@25 -- # sleep 1s
00:15:46.612   06:26:03	-- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:15:46.612   06:26:03	-- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:15:46.612   06:26:03	-- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:15:46.612   06:26:03	-- target/multipath.sh@132 -- # wait 75077
00:15:49.146  
00:15:49.146  job0: (groupid=0, jobs=1): err= 0: pid=75099: Mon Dec 16 06:26:05 2024
00:15:49.146    read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(324MiB/6044msec)
00:15:49.146      slat (nsec): min=1753, max=4041.8k, avg=36650.15, stdev=171795.72
00:15:49.146      clat (usec): min=760, max=50441, avg=6434.55, stdev=2146.25
00:15:49.146       lat (usec): min=776, max=50448, avg=6471.20, stdev=2147.96
00:15:49.146      clat percentiles (usec):
00:15:49.146       |  1.00th=[ 2737],  5.00th=[ 4228], 10.00th=[ 4948], 20.00th=[ 5407],
00:15:49.146       | 30.00th=[ 5669], 40.00th=[ 5932], 50.00th=[ 6194], 60.00th=[ 6521],
00:15:49.146       | 70.00th=[ 6783], 80.00th=[ 7177], 90.00th=[ 8160], 95.00th=[ 9241],
00:15:49.146       | 99.00th=[11469], 99.50th=[12518], 99.90th=[44827], 99.95th=[45351],
00:15:49.146       | 99.99th=[46924]
00:15:49.146     bw (  KiB/s): min=15536, max=37800, per=51.40%, avg=28260.00, stdev=7399.56, samples=12
00:15:49.146     iops        : min= 3884, max= 9450, avg=7065.00, stdev=1849.89, samples=12
00:15:49.146    write: IOPS=8068, BW=31.5MiB/s (33.0MB/s)(166MiB/5266msec); 0 zone resets
00:15:49.146      slat (usec): min=2, max=2504, avg=46.30, stdev=115.79
00:15:49.146      clat (usec): min=546, max=14413, avg=5476.77, stdev=1211.81
00:15:49.146       lat (usec): min=572, max=14442, avg=5523.07, stdev=1213.50
00:15:49.146      clat percentiles (usec):
00:15:49.146       |  1.00th=[ 2606],  5.00th=[ 3261], 10.00th=[ 3851], 20.00th=[ 4752],
00:15:49.146       | 30.00th=[ 5080], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5669],
00:15:49.146       | 70.00th=[ 5866], 80.00th=[ 6128], 90.00th=[ 6718], 95.00th=[ 7635],
00:15:49.146       | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[11338], 99.95th=[11863],
00:15:49.146       | 99.99th=[14222]
00:15:49.146     bw (  KiB/s): min=16384, max=36864, per=87.72%, avg=28309.58, stdev=7062.10, samples=12
00:15:49.146     iops        : min= 4096, max= 9216, avg=7077.33, stdev=1765.51, samples=12
00:15:49.146    lat (usec)   : 750=0.01%, 1000=0.02%
00:15:49.146    lat (msec)   : 2=0.24%, 4=6.27%, 10=91.44%, 20=1.92%, 50=0.10%
00:15:49.146    lat (msec)   : 100=0.01%
00:15:49.146    cpu          : usr=5.76%, sys=23.13%, ctx=7855, majf=0, minf=127
00:15:49.146    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:15:49.146       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:49.146       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:49.146       issued rwts: total=83068,42487,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:49.146       latency   : target=0, window=0, percentile=100.00%, depth=128
00:15:49.146  
00:15:49.146  Run status group 0 (all jobs):
00:15:49.146     READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=324MiB (340MB), run=6044-6044msec
00:15:49.146    WRITE: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=166MiB (174MB), run=5266-5266msec
00:15:49.146  
00:15:49.146  Disk stats (read/write):
00:15:49.146    nvme0n1: ios=82082/41463, merge=0/0, ticks=489174/212144, in_queue=701318, util=98.68%
00:15:49.146   06:26:05	-- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:15:49.146  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:15:49.146   06:26:06	-- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:15:49.146   06:26:06	-- common/autotest_common.sh@1208 -- # local i=0
00:15:49.146   06:26:06	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:15:49.146   06:26:06	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:49.146   06:26:06	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:49.146   06:26:06	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:15:49.146   06:26:06	-- common/autotest_common.sh@1220 -- # return 0
00:15:49.146   06:26:06	-- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:49.405   06:26:06	-- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state
00:15:49.405   06:26:06	-- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state
00:15:49.405   06:26:06	-- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT
00:15:49.405   06:26:06	-- target/multipath.sh@144 -- # nvmftestfini
00:15:49.405   06:26:06	-- nvmf/common.sh@476 -- # nvmfcleanup
00:15:49.405   06:26:06	-- nvmf/common.sh@116 -- # sync
00:15:49.664   06:26:06	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:15:49.664   06:26:06	-- nvmf/common.sh@119 -- # set +e
00:15:49.664   06:26:06	-- nvmf/common.sh@120 -- # for i in {1..20}
00:15:49.664   06:26:06	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:15:49.664  rmmod nvme_tcp
00:15:49.664  rmmod nvme_fabrics
00:15:49.664  rmmod nvme_keyring
00:15:49.664   06:26:06	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:15:49.664   06:26:06	-- nvmf/common.sh@123 -- # set -e
00:15:49.664   06:26:06	-- nvmf/common.sh@124 -- # return 0
00:15:49.664   06:26:06	-- nvmf/common.sh@477 -- # '[' -n 74784 ']'
00:15:49.664   06:26:06	-- nvmf/common.sh@478 -- # killprocess 74784
00:15:49.664   06:26:06	-- common/autotest_common.sh@936 -- # '[' -z 74784 ']'
00:15:49.664   06:26:06	-- common/autotest_common.sh@940 -- # kill -0 74784
00:15:49.664    06:26:06	-- common/autotest_common.sh@941 -- # uname
00:15:49.664   06:26:06	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:49.664    06:26:06	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74784
00:15:49.664   06:26:06	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:49.664   06:26:06	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:49.664  killing process with pid 74784
00:15:49.664   06:26:06	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 74784'
00:15:49.664   06:26:06	-- common/autotest_common.sh@955 -- # kill 74784
00:15:49.664   06:26:06	-- common/autotest_common.sh@960 -- # wait 74784
00:15:49.923   06:26:06	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:15:49.923   06:26:06	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:15:49.923   06:26:06	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:15:49.923   06:26:06	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:49.923   06:26:06	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:49.923   06:26:06	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:49.923   06:26:06	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:49.923    06:26:06	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:49.923   06:26:06	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:15:49.923  ************************************
00:15:49.923  END TEST nvmf_multipath
00:15:49.923  ************************************
00:15:49.923  
00:15:49.923  real	0m20.625s
00:15:49.923  user	1m20.188s
00:15:49.923  sys	0m6.177s
00:15:49.923   06:26:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:49.923   06:26:06	-- common/autotest_common.sh@10 -- # set +x
00:15:50.182   06:26:06	-- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:15:50.182   06:26:06	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:50.182   06:26:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:50.182   06:26:06	-- common/autotest_common.sh@10 -- # set +x
00:15:50.182  ************************************
00:15:50.182  START TEST nvmf_zcopy
00:15:50.182  ************************************
00:15:50.182   06:26:06	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:15:50.182  * Looking for test storage...
00:15:50.182  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:15:50.182    06:26:06	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:50.182     06:26:06	-- common/autotest_common.sh@1690 -- # lcov --version
00:15:50.182     06:26:06	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:50.182    06:26:07	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:50.182    06:26:07	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:50.182    06:26:07	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:50.182    06:26:07	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:50.182    06:26:07	-- scripts/common.sh@335 -- # IFS=.-:
00:15:50.182    06:26:07	-- scripts/common.sh@335 -- # read -ra ver1
00:15:50.182    06:26:07	-- scripts/common.sh@336 -- # IFS=.-:
00:15:50.182    06:26:07	-- scripts/common.sh@336 -- # read -ra ver2
00:15:50.182    06:26:07	-- scripts/common.sh@337 -- # local 'op=<'
00:15:50.182    06:26:07	-- scripts/common.sh@339 -- # ver1_l=2
00:15:50.182    06:26:07	-- scripts/common.sh@340 -- # ver2_l=1
00:15:50.182    06:26:07	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:50.182    06:26:07	-- scripts/common.sh@343 -- # case "$op" in
00:15:50.182    06:26:07	-- scripts/common.sh@344 -- # : 1
00:15:50.182    06:26:07	-- scripts/common.sh@363 -- # (( v = 0 ))
00:15:50.182    06:26:07	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:50.182     06:26:07	-- scripts/common.sh@364 -- # decimal 1
00:15:50.182     06:26:07	-- scripts/common.sh@352 -- # local d=1
00:15:50.182     06:26:07	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:50.182     06:26:07	-- scripts/common.sh@354 -- # echo 1
00:15:50.182    06:26:07	-- scripts/common.sh@364 -- # ver1[v]=1
00:15:50.182     06:26:07	-- scripts/common.sh@365 -- # decimal 2
00:15:50.182     06:26:07	-- scripts/common.sh@352 -- # local d=2
00:15:50.182     06:26:07	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:50.182     06:26:07	-- scripts/common.sh@354 -- # echo 2
00:15:50.182    06:26:07	-- scripts/common.sh@365 -- # ver2[v]=2
00:15:50.183    06:26:07	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:15:50.183    06:26:07	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:15:50.183    06:26:07	-- scripts/common.sh@367 -- # return 0
00:15:50.183    06:26:07	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:50.183    06:26:07	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:15:50.183  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.183  		--rc genhtml_branch_coverage=1
00:15:50.183  		--rc genhtml_function_coverage=1
00:15:50.183  		--rc genhtml_legend=1
00:15:50.183  		--rc geninfo_all_blocks=1
00:15:50.183  		--rc geninfo_unexecuted_blocks=1
00:15:50.183  		
00:15:50.183  		'
00:15:50.183    06:26:07	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:15:50.183  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.183  		--rc genhtml_branch_coverage=1
00:15:50.183  		--rc genhtml_function_coverage=1
00:15:50.183  		--rc genhtml_legend=1
00:15:50.183  		--rc geninfo_all_blocks=1
00:15:50.183  		--rc geninfo_unexecuted_blocks=1
00:15:50.183  		
00:15:50.183  		'
00:15:50.183    06:26:07	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:15:50.183  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.183  		--rc genhtml_branch_coverage=1
00:15:50.183  		--rc genhtml_function_coverage=1
00:15:50.183  		--rc genhtml_legend=1
00:15:50.183  		--rc geninfo_all_blocks=1
00:15:50.183  		--rc geninfo_unexecuted_blocks=1
00:15:50.183  		
00:15:50.183  		'
00:15:50.183    06:26:07	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:15:50.183  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.183  		--rc genhtml_branch_coverage=1
00:15:50.183  		--rc genhtml_function_coverage=1
00:15:50.183  		--rc genhtml_legend=1
00:15:50.183  		--rc geninfo_all_blocks=1
00:15:50.183  		--rc geninfo_unexecuted_blocks=1
00:15:50.183  		
00:15:50.183  		'
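The block above is the coverage-tooling probe that runs before every sourced test: lcov --version is read, cmp_versions splits both version strings on ".", "-" and ":" and compares the fields numerically, and because 1.15 is older than 2 the pre-2.0 lcov flags (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) end up in LCOV_OPTS and LCOV. A compact sketch of the same comparison, assuming purely numeric version fields:

    # Succeeds (returns 0) when version $1 is strictly older than version $2.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi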
00:15:50.183   06:26:07	-- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:15:50.183     06:26:07	-- nvmf/common.sh@7 -- # uname -s
00:15:50.183    06:26:07	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:50.183    06:26:07	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:50.183    06:26:07	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:50.183    06:26:07	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:50.183    06:26:07	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:50.183    06:26:07	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:50.183    06:26:07	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:50.183    06:26:07	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:50.183    06:26:07	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:50.183     06:26:07	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:50.183    06:26:07	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:15:50.183    06:26:07	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:15:50.183    06:26:07	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:50.183    06:26:07	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:50.183    06:26:07	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:15:50.183    06:26:07	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:50.183     06:26:07	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:50.183     06:26:07	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:50.183     06:26:07	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:50.183      06:26:07	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:50.183      06:26:07	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:50.183      06:26:07	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:50.183      06:26:07	-- paths/export.sh@5 -- # export PATH
00:15:50.183      06:26:07	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:50.183    06:26:07	-- nvmf/common.sh@46 -- # : 0
00:15:50.183    06:26:07	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:15:50.183    06:26:07	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:15:50.183    06:26:07	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:15:50.183    06:26:07	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:50.183    06:26:07	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:50.183    06:26:07	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:15:50.183    06:26:07	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:15:50.183    06:26:07	-- nvmf/common.sh@50 -- # have_pci_nics=0
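Everything from nvmf/common.sh@7 down to @50 above is environment setup for the zcopy test that follows: TCP ports 4420-4422, serial SPDKISFASTANDAWESOME, a freshly generated host NQN and host ID pair, and NET_TYPE=virt selecting the veth topology built below. The NVME_HOST array defined at @19 is how later nvme connect calls would identify this initiator; a hedged example of its intended use (no such connect command appears in this excerpt, so the invocation below is illustrative only):

    NVME_HOSTNQN=$(nvme gen-hostnqn)                      # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                   # bare UUID portion of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1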
00:15:50.183   06:26:07	-- target/zcopy.sh@12 -- # nvmftestinit
00:15:50.183   06:26:07	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:15:50.183   06:26:07	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:50.183   06:26:07	-- nvmf/common.sh@436 -- # prepare_net_devs
00:15:50.183   06:26:07	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:15:50.183   06:26:07	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:15:50.183   06:26:07	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:50.183   06:26:07	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:50.183    06:26:07	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:50.183   06:26:07	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:15:50.183   06:26:07	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:15:50.183   06:26:07	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:15:50.183   06:26:07	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:15:50.183   06:26:07	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:15:50.183   06:26:07	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:15:50.183   06:26:07	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:50.183   06:26:07	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:50.183   06:26:07	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:15:50.183   06:26:07	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:15:50.183   06:26:07	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:15:50.183   06:26:07	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:15:50.183   06:26:07	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:15:50.183   06:26:07	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:50.183   06:26:07	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:15:50.183   06:26:07	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:15:50.183   06:26:07	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:15:50.183   06:26:07	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:15:50.183   06:26:07	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:15:50.183   06:26:07	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:15:50.183  Cannot find device "nvmf_tgt_br"
00:15:50.183   06:26:07	-- nvmf/common.sh@154 -- # true
00:15:50.183   06:26:07	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:15:50.183  Cannot find device "nvmf_tgt_br2"
00:15:50.183   06:26:07	-- nvmf/common.sh@155 -- # true
00:15:50.183   06:26:07	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:15:50.183   06:26:07	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:15:50.183  Cannot find device "nvmf_tgt_br"
00:15:50.183   06:26:07	-- nvmf/common.sh@157 -- # true
00:15:50.183   06:26:07	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:15:50.442  Cannot find device "nvmf_tgt_br2"
00:15:50.442   06:26:07	-- nvmf/common.sh@158 -- # true
00:15:50.442   06:26:07	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:15:50.442   06:26:07	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:15:50.442   06:26:07	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:50.442  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:50.442   06:26:07	-- nvmf/common.sh@161 -- # true
00:15:50.442   06:26:07	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:50.442  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:50.442   06:26:07	-- nvmf/common.sh@162 -- # true
00:15:50.442   06:26:07	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:15:50.442   06:26:07	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:15:50.442   06:26:07	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:15:50.442   06:26:07	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:15:50.442   06:26:07	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:15:50.442   06:26:07	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:15:50.442   06:26:07	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:15:50.442   06:26:07	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:15:50.442   06:26:07	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:15:50.442   06:26:07	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:15:50.442   06:26:07	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:15:50.442   06:26:07	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:15:50.442   06:26:07	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:15:50.442   06:26:07	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:15:50.442   06:26:07	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:15:50.442   06:26:07	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:50.442   06:26:07	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:15:50.442   06:26:07	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:15:50.442   06:26:07	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:15:50.442   06:26:07	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:50.442   06:26:07	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:50.442   06:26:07	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:50.442   06:26:07	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:15:50.442   06:26:07	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:15:50.442  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:50.442  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms
00:15:50.442  
00:15:50.442  --- 10.0.0.2 ping statistics ---
00:15:50.442  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:50.442  rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:15:50.442   06:26:07	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:15:50.442  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:50.442  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms
00:15:50.442  
00:15:50.442  --- 10.0.0.3 ping statistics ---
00:15:50.442  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:50.442  rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:15:50.442   06:26:07	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:50.442  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:50.442  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:15:50.442  
00:15:50.442  --- 10.0.0.1 ping statistics ---
00:15:50.442  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:50.442  rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:15:50.442   06:26:07	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:50.442   06:26:07	-- nvmf/common.sh@421 -- # return 0
00:15:50.442   06:26:07	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:15:50.442   06:26:07	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:50.442   06:26:07	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:15:50.442   06:26:07	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:15:50.442   06:26:07	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:50.442   06:26:07	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:15:50.443   06:26:07	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
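nvmf_veth_init, replayed step by step above (the "Cannot find device" and "Cannot open network namespace" lines are only best-effort cleanup of leftovers from a previous run), builds the virtual topology the rest of this job uses: the initiator keeps nvmf_init_if (10.0.0.1/24) on the host, the nvmf_tgt_ns_spdk namespace owns nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), the three bridge-side veth peers are enslaved to nvmf_br, connectivity is verified with three pings, and nvme-tcp is loaded on the initiator. The same commands, condensed and without the cleanup noise:

    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator-facing, two target-facing.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator on .1, target namespace on .2 and .3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring every endpoint up, including loopback inside the namespace.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Tie the host-side peers together with a bridge and allow the NVMe/TCP port through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks before loading the initiator module.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp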
00:15:50.701   06:26:07	-- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:15:50.701   06:26:07	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:15:50.701   06:26:07	-- common/autotest_common.sh@722 -- # xtrace_disable
00:15:50.701   06:26:07	-- common/autotest_common.sh@10 -- # set +x
00:15:50.701   06:26:07	-- nvmf/common.sh@469 -- # nvmfpid=75384
00:15:50.701   06:26:07	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:15:50.701   06:26:07	-- nvmf/common.sh@470 -- # waitforlisten 75384
00:15:50.701   06:26:07	-- common/autotest_common.sh@829 -- # '[' -z 75384 ']'
00:15:50.701   06:26:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:50.701   06:26:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:50.701   06:26:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:50.701  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:50.701   06:26:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:50.701   06:26:07	-- common/autotest_common.sh@10 -- # set +x
00:15:50.701  [2024-12-16 06:26:07.491849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:50.702  [2024-12-16 06:26:07.491935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:50.702  [2024-12-16 06:26:07.630724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:50.960  [2024-12-16 06:26:07.731785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:50.960  [2024-12-16 06:26:07.731952] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:50.960  [2024-12-16 06:26:07.731968] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:50.960  [2024-12-16 06:26:07.731980] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:50.960  [2024-12-16 06:26:07.732018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:51.528   06:26:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:51.528   06:26:08	-- common/autotest_common.sh@862 -- # return 0
00:15:51.528   06:26:08	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:15:51.528   06:26:08	-- common/autotest_common.sh@728 -- # xtrace_disable
00:15:51.528   06:26:08	-- common/autotest_common.sh@10 -- # set +x
00:15:51.528   06:26:08	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
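nvmfappstart -m 0x2 launches nvmf_tgt inside the target namespace, records its PID (75384 here), and waitforlisten then polls the default RPC socket /var/tmp/spdk.sock, with up to max_retries=100 attempts, until the application answers; the DPDK/EAL and reactor notices above are that target coming up on core 1. A simplified sketch of the start-and-wait pattern (the rpc_get_methods probe is an assumption about how readiness is checked; paths are relative to the SPDK checkout):

    ip netns exec nvmf_tgt_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # waitforlisten: keep probing the JSON-RPC socket until the target responds.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
        sleep 0.5
    done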
00:15:51.528   06:26:08	-- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:15:51.528   06:26:08	-- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:15:51.528   06:26:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.528   06:26:08	-- common/autotest_common.sh@10 -- # set +x
00:15:51.528  [2024-12-16 06:26:08.461394] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:51.528   06:26:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.528   06:26:08	-- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:51.528   06:26:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.528   06:26:08	-- common/autotest_common.sh@10 -- # set +x
00:15:51.528   06:26:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.528   06:26:08	-- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:51.528   06:26:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.528   06:26:08	-- common/autotest_common.sh@10 -- # set +x
00:15:51.528  [2024-12-16 06:26:08.477472] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:51.528   06:26:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.528   06:26:08	-- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:15:51.528   06:26:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.528   06:26:08	-- common/autotest_common.sh@10 -- # set +x
00:15:51.528   06:26:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.528   06:26:08	-- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:15:51.528   06:26:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.528   06:26:08	-- common/autotest_common.sh@10 -- # set +x
00:15:51.788  malloc0
00:15:51.788   06:26:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.788   06:26:08	-- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:15:51.788   06:26:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.788   06:26:08	-- common/autotest_common.sh@10 -- # set +x
00:15:51.788   06:26:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
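The rpc_cmd calls above are the whole target-side setup for the zcopy test: a TCP transport created with the zero-copy option and in-capsule data disabled (-c 0), subsystem cnode1 allowing any host (-a) with serial SPDK00000000000001 and at most 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4096-byte blocks attached as namespace 1. rpc_cmd ultimately drives scripts/rpc.py, so the equivalent direct invocations would be:

    rpc=scripts/rpc.py        # relative to the SPDK checkout, talking to /var/tmp/spdk.sock

    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0     # 32 MiB bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1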
00:15:51.788   06:26:08	-- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:15:51.788    06:26:08	-- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:15:51.788    06:26:08	-- nvmf/common.sh@520 -- # config=()
00:15:51.788    06:26:08	-- nvmf/common.sh@520 -- # local subsystem config
00:15:51.788    06:26:08	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:51.788    06:26:08	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:51.788  {
00:15:51.788    "params": {
00:15:51.788      "name": "Nvme$subsystem",
00:15:51.788      "trtype": "$TEST_TRANSPORT",
00:15:51.788      "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:51.788      "adrfam": "ipv4",
00:15:51.788      "trsvcid": "$NVMF_PORT",
00:15:51.788      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:51.788      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:51.788      "hdgst": ${hdgst:-false},
00:15:51.788      "ddgst": ${ddgst:-false}
00:15:51.788    },
00:15:51.788    "method": "bdev_nvme_attach_controller"
00:15:51.788  }
00:15:51.788  EOF
00:15:51.788  )")
00:15:51.788     06:26:08	-- nvmf/common.sh@542 -- # cat
00:15:51.788    06:26:08	-- nvmf/common.sh@544 -- # jq .
00:15:51.788     06:26:08	-- nvmf/common.sh@545 -- # IFS=,
00:15:51.788     06:26:08	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:15:51.788    "params": {
00:15:51.788      "name": "Nvme1",
00:15:51.788      "trtype": "tcp",
00:15:51.788      "traddr": "10.0.0.2",
00:15:51.788      "adrfam": "ipv4",
00:15:51.788      "trsvcid": "4420",
00:15:51.788      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:51.788      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:51.788      "hdgst": false,
00:15:51.788      "ddgst": false
00:15:51.788    },
00:15:51.788    "method": "bdev_nvme_attach_controller"
00:15:51.788  }'
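bdevperf itself never calls rpc.py here; it is handed a ready-made bdev configuration on a file descriptor. gen_nvmf_target_json assembles the bdev_nvme_attach_controller block printed above (controller Nvme1 attached to cnode1 at 10.0.0.2:4420, header and data digests off) into a bdev-subsystem JSON config, and zcopy.sh@33 feeds it in through process substitution, which is why the command line shows --json /dev/fd/62. A hypothetical equivalent that writes the config to a file instead, for inspection:

    # gen_nvmf_target_json is the nvmf/common.sh helper traced above; paths are
    # relative to the SPDK checkout.
    gen_nvmf_target_json > /tmp/bdevperf_nvmf.json
    build/examples/bdevperf --json /tmp/bdevperf_nvmf.json \
        -t 10 -q 128 -w verify -o 8192        # 10 s verify pass, queue depth 128, 8 KiB I/O

The second bdevperf instance started at zcopy.sh@37 below reuses the same generated config but runs a 5-second 50/50 random read/write workload (-t 5 -w randrw -M 50).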
00:15:51.788  [2024-12-16 06:26:08.570677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:51.788  [2024-12-16 06:26:08.570768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75435 ]
00:15:51.788  [2024-12-16 06:26:08.711603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.047  [2024-12-16 06:26:08.822714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:52.305  Running I/O for 10 seconds...
00:16:02.287  
00:16:02.287                                                                                                  Latency(us)
00:16:02.287  
[2024-12-16T06:26:19.263Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:02.287  
[2024-12-16T06:26:19.263Z]  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:02.287  	 Verification LBA range: start 0x0 length 0x1000
00:16:02.287  	 Nvme1n1             :      10.01   11239.59      87.81       0.00     0.00   11360.67    1169.22   20375.74
00:16:02.287  
[2024-12-16T06:26:19.263Z]  ===================================================================================================================
00:16:02.287  
[2024-12-16T06:26:19.263Z]  Total                       :              11239.59      87.81       0.00     0.00   11360.67    1169.22   20375.74
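Reading that summary: the verify workload sustained 11239.59 IOPS of 8 KiB I/O with no failed or timed-out commands, and the MiB/s column is simply IOPS times the I/O size (11239.59 IOPS x 8192 B is roughly 92.07 MB/s, i.e. 87.81 MiB/s). The average latency of 11360.67 us is likewise consistent with a full queue depth of 128: 128 / 11.36 ms gives about 11,267 IOPS, within a few tenths of a percent of the measured rate.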
00:16:02.547   06:26:19	-- target/zcopy.sh@39 -- # perfpid=75552
00:16:02.547   06:26:19	-- target/zcopy.sh@41 -- # xtrace_disable
00:16:02.547   06:26:19	-- common/autotest_common.sh@10 -- # set +x
00:16:02.547    06:26:19	-- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:02.547   06:26:19	-- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:02.547    06:26:19	-- nvmf/common.sh@520 -- # config=()
00:16:02.547    06:26:19	-- nvmf/common.sh@520 -- # local subsystem config
00:16:02.547    06:26:19	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:16:02.547    06:26:19	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:16:02.547  {
00:16:02.547    "params": {
00:16:02.547      "name": "Nvme$subsystem",
00:16:02.547      "trtype": "$TEST_TRANSPORT",
00:16:02.547      "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:02.547      "adrfam": "ipv4",
00:16:02.547      "trsvcid": "$NVMF_PORT",
00:16:02.547      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:02.547      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:02.547      "hdgst": ${hdgst:-false},
00:16:02.547      "ddgst": ${ddgst:-false}
00:16:02.547    },
00:16:02.547    "method": "bdev_nvme_attach_controller"
00:16:02.547  }
00:16:02.547  EOF
00:16:02.547  )")
00:16:02.547     06:26:19	-- nvmf/common.sh@542 -- # cat
00:16:02.547  [2024-12-16 06:26:19.346350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.346402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547    06:26:19	-- nvmf/common.sh@544 -- # jq .
00:16:02.547     06:26:19	-- nvmf/common.sh@545 -- # IFS=,
00:16:02.547     06:26:19	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:16:02.547    "params": {
00:16:02.547      "name": "Nvme1",
00:16:02.547      "trtype": "tcp",
00:16:02.547      "traddr": "10.0.0.2",
00:16:02.547      "adrfam": "ipv4",
00:16:02.547      "trsvcid": "4420",
00:16:02.547      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:02.547      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:02.547      "hdgst": false,
00:16:02.547      "ddgst": false
00:16:02.547    },
00:16:02.547    "method": "bdev_nvme_attach_controller"
00:16:02.547  }'
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.358323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.358359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.370328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.370363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.382332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.382368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.394330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.394365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
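From here to the end of the excerpt the log settles into a repeating three-line pattern: the target reports "Requested NSID 1 already in use", the RPC layer reports "Unable to add namespace", and the shell side logs the failed JSON-RPC call with Code=-32602. The parameters of every failure (bdev malloc0, nsid 1, cnode1) match the namespace already attached during setup, and the second bdevperf instance (file prefix spdk_pid75552) is starting in parallel, so this looks like the harness deliberately re-issuing nvmf_subsystem_add_ns while I/O is in flight and treating the rejection as the expected outcome rather than a failure of the run. Each rejected call is equivalent to:

    # Expected to fail: namespace 1 on cnode1 already maps to malloc0.
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
    # => Code=-32602 (Invalid parameters): Requested NSID 1 already in use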
00:16:02.547  [2024-12-16 06:26:19.399895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:02.547  [2024-12-16 06:26:19.399986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75552 ]
00:16:02.547  [2024-12-16 06:26:19.406334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.406370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.418340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.418375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.430349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.430367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.442346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.442364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.454346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.454380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.466349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.466384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.547  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.547  [2024-12-16 06:26:19.478354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.547  [2024-12-16 06:26:19.478388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.548  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.548  [2024-12-16 06:26:19.490357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.548  [2024-12-16 06:26:19.490375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.548  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.548  [2024-12-16 06:26:19.502362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.548  [2024-12-16 06:26:19.502381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.548  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.548  [2024-12-16 06:26:19.514364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.548  [2024-12-16 06:26:19.514398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.548  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.526381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.526424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.533635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:02.808  [2024-12-16 06:26:19.538369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.538404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.550373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.550408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.562374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.562409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.574380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.574424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.586384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.586444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.598386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.598427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.610387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.610429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  [2024-12-16 06:26:19.613930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.622390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.622449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.634392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.634434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.646394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.646412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.658398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.658440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.670401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.670441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.682405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.682446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.694408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.694449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.706410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.706467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.718413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.808  [2024-12-16 06:26:19.718471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.808  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.808  [2024-12-16 06:26:19.730440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.809  [2024-12-16 06:26:19.730476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.809  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.809  [2024-12-16 06:26:19.742473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.809  [2024-12-16 06:26:19.742511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.809  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.809  [2024-12-16 06:26:19.754465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.809  [2024-12-16 06:26:19.754515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.809  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.809  [2024-12-16 06:26:19.766479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.809  [2024-12-16 06:26:19.766527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.809  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:02.809  [2024-12-16 06:26:19.778492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.809  [2024-12-16 06:26:19.778542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.790487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.790550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.802515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.802568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  Running I/O for 5 seconds...
00:16:03.069  [2024-12-16 06:26:19.814496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.814557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.829618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.829660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.844401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.844427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.855574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.855615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.871178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.871204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.887945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.887973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.904217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.904243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.920782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.920809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.069  [2024-12-16 06:26:19.937387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:03.069  [2024-12-16 06:26:19.937414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.069  2024/12/16 06:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... 122 further identical nvmf_subsystem_add_ns request/error cycles (06:26:19.954 through 06:26:21.908) omitted: each attempt is rejected with "Requested NSID 1 already in use" / "Unable to add namespace" and the client logs Code=-32602 Msg=Invalid parameters ...]
00:16:05.169  [2024-12-16 06:26:21.925205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:21.925232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:21.941392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:21.941434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:21.958738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:21.958812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:21.974877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:21.974903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:21.991405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:21.991447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.008281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.008307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.024366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.024393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.041030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.041072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.058093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.058134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.073360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.073387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.089584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.089625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.106355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.106397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.122133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.122160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.169  [2024-12-16 06:26:22.139149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.169  [2024-12-16 06:26:22.139192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.169  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.154857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.154882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.169625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.169668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.180883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.180910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.196640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.196682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.212240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.212266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.223958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.223983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.238361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.238387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.252959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.252985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.265389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.265415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.280464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.280501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.296686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.296712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.313602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.313627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.429  [2024-12-16 06:26:22.330495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.429  [2024-12-16 06:26:22.330535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.429  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.430  [2024-12-16 06:26:22.346762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.430  [2024-12-16 06:26:22.346804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.430  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.430  [2024-12-16 06:26:22.363974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.430  [2024-12-16 06:26:22.364000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.430  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.430  [2024-12-16 06:26:22.379387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.430  [2024-12-16 06:26:22.379413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.430  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.430  [2024-12-16 06:26:22.390784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.430  [2024-12-16 06:26:22.390826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.430  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.406541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.406571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.422266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.422309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.439403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.439429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.455803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.455845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.472165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.472207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.489439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.489482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.506333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.506358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.521874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.521917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.536571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.536612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.552443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.552484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.569031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.569073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.585741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.585785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.603044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.603086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.618130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.618171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.635592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.635632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.690  [2024-12-16 06:26:22.651616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.690  [2024-12-16 06:26:22.651641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.690  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.968  [2024-12-16 06:26:22.668967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.968  [2024-12-16 06:26:22.669009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.968  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.968  [2024-12-16 06:26:22.684201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.968  [2024-12-16 06:26:22.684244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.968  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.968  [2024-12-16 06:26:22.700856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.968  [2024-12-16 06:26:22.700882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.968  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.968  [2024-12-16 06:26:22.717118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.968  [2024-12-16 06:26:22.717144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.968  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.968  [2024-12-16 06:26:22.734103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.968  [2024-12-16 06:26:22.734145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.750345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.750387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.767256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.767282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.783541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.783581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.799926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.799968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.816226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.816253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.827557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.827599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.843983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.844009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.859910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.859936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.876575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.876600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.892808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.892852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.910082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.910108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:05.969  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:05.969  [2024-12-16 06:26:22.926789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:05.969  [2024-12-16 06:26:22.926835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:22.943018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:22.943061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:22.959735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:22.959777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:22.976096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:22.976122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:22.993004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:22.993030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:23.009592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:23.009619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:23.026198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:23.026240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:23.043079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:23.043105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:23.059597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:23.059637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.263  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.263  [2024-12-16 06:26:23.075816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.263  [2024-12-16 06:26:23.075858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.092403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.092429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.108525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.108550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.125271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.125297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.141335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.141377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.152464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.152534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.167512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.167563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.184309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.184335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.200217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.200243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.211439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.211464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.264  [2024-12-16 06:26:23.226745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.264  [2024-12-16 06:26:23.226787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.264  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.243892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.243918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.258664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.258707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.274320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.274347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.291212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.291237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.307689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.307726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.324521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.324562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.339431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.339457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.353895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.353921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.365995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.366021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.380659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.380684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.395363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.395389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.405901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.405926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.421410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.421437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.438297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.438340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.454681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.454738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.524  [2024-12-16 06:26:23.471076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.524  [2024-12-16 06:26:23.471101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.524  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.525  [2024-12-16 06:26:23.487562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.525  [2024-12-16 06:26:23.487603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.525  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.783  [2024-12-16 06:26:23.505344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.783  [2024-12-16 06:26:23.505387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.783  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.783  [2024-12-16 06:26:23.519648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.783  [2024-12-16 06:26:23.519693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.783  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.783  [2024-12-16 06:26:23.535181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.783  [2024-12-16 06:26:23.535207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.783  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.783  [2024-12-16 06:26:23.552360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.783  [2024-12-16 06:26:23.552386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.783  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.783  [2024-12-16 06:26:23.568855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.783  [2024-12-16 06:26:23.568880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.783  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.783  [2024-12-16 06:26:23.585440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.783  [2024-12-16 06:26:23.585466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.783  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.783  [2024-12-16 06:26:23.602020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.783  [2024-12-16 06:26:23.602046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.783  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.618810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.618868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.634268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.634311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.645223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.645267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.662035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.662076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.677252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.677294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.693536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.693586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.710878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.710919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.726973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.727017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.784  [2024-12-16 06:26:23.743727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.784  [2024-12-16 06:26:23.743769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.784  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.043  [2024-12-16 06:26:23.760930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.043  [2024-12-16 06:26:23.760972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.043  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.043  [2024-12-16 06:26:23.775940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.043  [2024-12-16 06:26:23.775982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.791912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.791969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.808443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.808469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.825186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.825212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.840840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.840866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.856634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.856675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.872985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.873011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.889569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.889595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.906258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.906299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.922678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.922706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.939657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.939699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.955587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.955613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.972422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.972449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:23.989044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:23.989071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.044  [2024-12-16 06:26:24.006071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.044  [2024-12-16 06:26:24.006097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.044  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.304  [2024-12-16 06:26:24.023304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.304  [2024-12-16 06:26:24.023330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.304  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.304  [2024-12-16 06:26:24.037917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.304  [2024-12-16 06:26:24.037960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.304  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.304  [2024-12-16 06:26:24.052781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.304  [2024-12-16 06:26:24.052823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.068888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.068913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.080070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.080111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.095732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.095758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.111884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.111909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.128694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.128736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.145413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.145455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.160986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.161029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.172447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.172488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.187599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.187625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.203683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.203725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.215589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.215615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.231746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.231772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.248204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.248232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.305  [2024-12-16 06:26:24.264467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.305  [2024-12-16 06:26:24.264503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.305  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.564  [2024-12-16 06:26:24.281397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.564  [2024-12-16 06:26:24.281423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.564  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.296904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.296930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.313699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.313724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.330129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.330170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.347124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.347150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.363266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.363293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.379391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.379418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.395641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.395667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.412020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.412046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.423706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.423747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.440008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.440034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.456114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.456141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.472685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.472711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.489129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.489171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.505731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.505757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.523545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.523599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.565  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.565  [2024-12-16 06:26:24.538241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.565  [2024-12-16 06:26:24.538282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.555097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.555139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.571388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.571414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.587744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.587770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.604439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.604464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.620483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.620550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.637021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.637047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.653211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.653237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.670222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.670250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.686764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.686808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.703213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.703238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.718978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.719004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.734045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.734071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.751945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.751988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.766477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.766531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.824  [2024-12-16 06:26:24.782212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.824  [2024-12-16 06:26:24.782238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.824  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.799987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.800029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.813381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.813423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
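The run of failures above is what nvmf_subsystem_add_ns returns when the requested NSID is already occupied. A minimal sketch of reproducing one such call by hand, assuming the usual scripts/rpc.py entry point under the repo path that appears elsewhere in this log (the sketch is illustrative, not part of the captured output):

SPDK_DIR=/home/vagrant/spdk_repo/spdk      # assumed checkout location, matching paths in this log
NQN=nqn.2016-06.io.spdk:cnode1

# First add succeeds and claims NSID 1 for malloc0.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1

# Any repeat with the same NSID is rejected, which is what the log shows:
# Code=-32602 (Invalid parameters), "Requested NSID 1 already in use".
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || echo "expected failure"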
00:16:08.084                                                                                                  Latency(us)
[2024-12-16T06:26:25.060Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-16T06:26:25.060Z]  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:08.084  	 Nvme1n1             :       5.01   13802.41     107.83       0.00     0.00    9262.80    3902.37   19541.64
[2024-12-16T06:26:25.060Z]  ===================================================================================================================
[2024-12-16T06:26:25.060Z]  Total                       :              13802.41     107.83       0.00     0.00    9262.80    3902.37   19541.64
00:16:08.084  [2024-12-16 06:26:24.823044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.823084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.835089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.835114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.847064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.847101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.859062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.859081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.871063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.871082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.883068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.883102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.895073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.895108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.907077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.907096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.919080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.919114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.931085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.931104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.943087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.943121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.955075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.955108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.084  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.084  [2024-12-16 06:26:24.967077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.084  [2024-12-16 06:26:24.967110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.085  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.085  [2024-12-16 06:26:24.979079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.085  [2024-12-16 06:26:24.979112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.085  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.085  [2024-12-16 06:26:24.991081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.085  [2024-12-16 06:26:24.991098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.085  2024/12/16 06:26:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.085  [2024-12-16 06:26:25.003086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.085  [2024-12-16 06:26:25.003118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.085  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.085  [2024-12-16 06:26:25.015088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.085  [2024-12-16 06:26:25.015122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.085  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.085  [2024-12-16 06:26:25.027090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.085  [2024-12-16 06:26:25.027108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.085  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.085  [2024-12-16 06:26:25.039092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.085  [2024-12-16 06:26:25.039126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.085  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.085  [2024-12-16 06:26:25.051094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.085  [2024-12-16 06:26:25.051111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.085  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.343  [2024-12-16 06:26:25.063114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.343  [2024-12-16 06:26:25.063148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.343  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.343  [2024-12-16 06:26:25.075101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.343  [2024-12-16 06:26:25.075135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.343  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.343  [2024-12-16 06:26:25.087105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.343  [2024-12-16 06:26:25.087139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.343  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.343  [2024-12-16 06:26:25.099108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.343  [2024-12-16 06:26:25.099142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.343  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.343  [2024-12-16 06:26:25.111135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.343  [2024-12-16 06:26:25.111170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.343  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.343  [2024-12-16 06:26:25.123120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.343  [2024-12-16 06:26:25.123154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.343  2024/12/16 06:26:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:08.343  /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75552) - No such process
00:16:08.343   06:26:25	-- target/zcopy.sh@49 -- # wait 75552
00:16:08.343   06:26:25	-- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:08.343   06:26:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.343   06:26:25	-- common/autotest_common.sh@10 -- # set +x
00:16:08.343   06:26:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.343   06:26:25	-- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:08.343   06:26:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.343   06:26:25	-- common/autotest_common.sh@10 -- # set +x
00:16:08.343  delay0
00:16:08.343   06:26:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.343   06:26:25	-- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:16:08.343   06:26:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.343   06:26:25	-- common/autotest_common.sh@10 -- # set +x
00:16:08.343   06:26:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
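The rpc_cmd calls above remove the old namespace, wrap malloc0 in a delay bdev, and re-expose it as NSID 1. A minimal sketch of the equivalent direct rpc.py invocations, assuming the standard scripts/rpc.py wrapper (the test's rpc_cmd helper is assumed to map one-to-one onto these):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Free NSID 1 by dropping the existing namespace from the subsystem.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_ns "$NQN" 1

# Create a delay bdev on top of malloc0, reusing the latency arguments from the script above.
"$SPDK_DIR/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Expose the delayed bdev as namespace 1 of the same subsystem.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" delay0 -n 1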
00:16:08.343   06:26:25	-- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:16:08.343  [2024-12-16 06:26:25.310859] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:16:14.958  Initializing NVMe Controllers
00:16:14.958  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:14.958  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:14.958  Initialization complete. Launching workers.
00:16:14.958  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 82
00:16:14.958  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 369, failed to submit 33
00:16:14.958  	 success 192, unsuccess 177, failed 0
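For reference, a hedged reading of the abort example invocation above; the flag meanings are assumptions based on typical SPDK example-app option conventions, not something this log states:

#   -c 0x1      core mask (run on core 0 only)
#   -t 5        run time in seconds
#   -q 64       queue depth
#   -w randrw   mixed random read/write workload
#   -M 50       read percentage of the mixed workload
#   -l warning  log level
#   -r '...'    transport ID of the target (TCP, 10.0.0.2:4420, namespace 1)
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'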
00:16:14.958   06:26:31	-- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:16:14.958   06:26:31	-- target/zcopy.sh@60 -- # nvmftestfini
00:16:14.958   06:26:31	-- nvmf/common.sh@476 -- # nvmfcleanup
00:16:14.958   06:26:31	-- nvmf/common.sh@116 -- # sync
00:16:14.958   06:26:31	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:16:14.958   06:26:31	-- nvmf/common.sh@119 -- # set +e
00:16:14.958   06:26:31	-- nvmf/common.sh@120 -- # for i in {1..20}
00:16:14.958   06:26:31	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:16:14.958  rmmod nvme_tcp
00:16:14.958  rmmod nvme_fabrics
00:16:14.958  rmmod nvme_keyring
00:16:14.958   06:26:31	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:14.958   06:26:31	-- nvmf/common.sh@123 -- # set -e
00:16:14.958   06:26:31	-- nvmf/common.sh@124 -- # return 0
00:16:14.958   06:26:31	-- nvmf/common.sh@477 -- # '[' -n 75384 ']'
00:16:14.958   06:26:31	-- nvmf/common.sh@478 -- # killprocess 75384
00:16:14.958   06:26:31	-- common/autotest_common.sh@936 -- # '[' -z 75384 ']'
00:16:14.958   06:26:31	-- common/autotest_common.sh@940 -- # kill -0 75384
00:16:14.958    06:26:31	-- common/autotest_common.sh@941 -- # uname
00:16:14.958   06:26:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:14.958    06:26:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75384
00:16:14.958   06:26:31	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:14.958  killing process with pid 75384
00:16:14.958   06:26:31	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:16:14.958   06:26:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 75384'
00:16:14.958   06:26:31	-- common/autotest_common.sh@955 -- # kill 75384
00:16:14.958   06:26:31	-- common/autotest_common.sh@960 -- # wait 75384
00:16:14.958   06:26:31	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:16:14.958   06:26:31	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:16:14.958   06:26:31	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:16:14.958   06:26:31	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:14.958   06:26:31	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:16:14.958   06:26:31	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:14.958   06:26:31	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:14.958    06:26:31	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:14.958   06:26:31	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:16:14.958  
00:16:14.958  real	0m24.849s
00:16:14.958  user	0m39.244s
00:16:14.958  sys	0m7.307s
00:16:14.958   06:26:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:14.958   06:26:31	-- common/autotest_common.sh@10 -- # set +x
00:16:14.958  ************************************
00:16:14.958  END TEST nvmf_zcopy
00:16:14.958  ************************************
00:16:14.958   06:26:31	-- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:16:14.958   06:26:31	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:14.958   06:26:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:14.958   06:26:31	-- common/autotest_common.sh@10 -- # set +x
00:16:14.958  ************************************
00:16:14.958  START TEST nvmf_nmic
00:16:14.958  ************************************
00:16:14.958   06:26:31	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:16:14.958  * Looking for test storage...
00:16:14.958  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:14.958    06:26:31	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:14.958     06:26:31	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:14.958     06:26:31	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:15.218    06:26:31	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:15.218    06:26:31	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:15.218    06:26:31	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:15.218    06:26:31	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:15.218    06:26:31	-- scripts/common.sh@335 -- # IFS=.-:
00:16:15.218    06:26:31	-- scripts/common.sh@335 -- # read -ra ver1
00:16:15.218    06:26:31	-- scripts/common.sh@336 -- # IFS=.-:
00:16:15.218    06:26:31	-- scripts/common.sh@336 -- # read -ra ver2
00:16:15.218    06:26:31	-- scripts/common.sh@337 -- # local 'op=<'
00:16:15.218    06:26:31	-- scripts/common.sh@339 -- # ver1_l=2
00:16:15.218    06:26:31	-- scripts/common.sh@340 -- # ver2_l=1
00:16:15.218    06:26:31	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:15.218    06:26:31	-- scripts/common.sh@343 -- # case "$op" in
00:16:15.218    06:26:31	-- scripts/common.sh@344 -- # : 1
00:16:15.218    06:26:31	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:15.218    06:26:31	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:15.218     06:26:31	-- scripts/common.sh@364 -- # decimal 1
00:16:15.218     06:26:31	-- scripts/common.sh@352 -- # local d=1
00:16:15.218     06:26:31	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:15.218     06:26:31	-- scripts/common.sh@354 -- # echo 1
00:16:15.218    06:26:31	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:15.218     06:26:31	-- scripts/common.sh@365 -- # decimal 2
00:16:15.218     06:26:31	-- scripts/common.sh@352 -- # local d=2
00:16:15.218     06:26:31	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:15.218     06:26:31	-- scripts/common.sh@354 -- # echo 2
00:16:15.218    06:26:31	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:15.218    06:26:31	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:15.218    06:26:31	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:15.218    06:26:31	-- scripts/common.sh@367 -- # return 0
00:16:15.218    06:26:31	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:15.218    06:26:31	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:15.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:15.218  		--rc genhtml_branch_coverage=1
00:16:15.218  		--rc genhtml_function_coverage=1
00:16:15.218  		--rc genhtml_legend=1
00:16:15.218  		--rc geninfo_all_blocks=1
00:16:15.218  		--rc geninfo_unexecuted_blocks=1
00:16:15.218  		
00:16:15.218  		'
00:16:15.218    06:26:31	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:15.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:15.218  		--rc genhtml_branch_coverage=1
00:16:15.218  		--rc genhtml_function_coverage=1
00:16:15.218  		--rc genhtml_legend=1
00:16:15.218  		--rc geninfo_all_blocks=1
00:16:15.218  		--rc geninfo_unexecuted_blocks=1
00:16:15.218  		
00:16:15.218  		'
00:16:15.218    06:26:31	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:15.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:15.218  		--rc genhtml_branch_coverage=1
00:16:15.218  		--rc genhtml_function_coverage=1
00:16:15.218  		--rc genhtml_legend=1
00:16:15.218  		--rc geninfo_all_blocks=1
00:16:15.218  		--rc geninfo_unexecuted_blocks=1
00:16:15.218  		
00:16:15.218  		'
00:16:15.218    06:26:31	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:15.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:15.218  		--rc genhtml_branch_coverage=1
00:16:15.218  		--rc genhtml_function_coverage=1
00:16:15.218  		--rc genhtml_legend=1
00:16:15.218  		--rc geninfo_all_blocks=1
00:16:15.218  		--rc geninfo_unexecuted_blocks=1
00:16:15.218  		
00:16:15.218  		'
00:16:15.218   06:26:31	-- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:15.218     06:26:31	-- nvmf/common.sh@7 -- # uname -s
00:16:15.218    06:26:31	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:15.218    06:26:31	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:15.218    06:26:31	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:15.218    06:26:31	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:15.218    06:26:31	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:15.218    06:26:31	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:15.218    06:26:31	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:15.218    06:26:31	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:15.218    06:26:31	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:15.218     06:26:31	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:15.218    06:26:32	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:16:15.218    06:26:32	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:16:15.218    06:26:32	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:15.218    06:26:32	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:15.218    06:26:32	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:15.218    06:26:32	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:15.218     06:26:32	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:15.218     06:26:32	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:15.218     06:26:32	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:15.218      06:26:32	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:15.218      06:26:32	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:15.218      06:26:32	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:15.218      06:26:32	-- paths/export.sh@5 -- # export PATH
00:16:15.218      06:26:32	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:15.218    06:26:32	-- nvmf/common.sh@46 -- # : 0
00:16:15.218    06:26:32	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:15.218    06:26:32	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:15.218    06:26:32	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:15.218    06:26:32	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:15.218    06:26:32	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:15.218    06:26:32	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:15.218    06:26:32	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:15.218    06:26:32	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:15.218   06:26:32	-- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:15.218   06:26:32	-- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:15.218   06:26:32	-- target/nmic.sh@14 -- # nvmftestinit
00:16:15.218   06:26:32	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:16:15.218   06:26:32	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:15.218   06:26:32	-- nvmf/common.sh@436 -- # prepare_net_devs
00:16:15.218   06:26:32	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:16:15.218   06:26:32	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:16:15.218   06:26:32	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:15.218   06:26:32	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:15.218    06:26:32	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:15.218   06:26:32	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:16:15.219   06:26:32	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:16:15.219   06:26:32	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:16:15.219   06:26:32	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:16:15.219   06:26:32	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:16:15.219   06:26:32	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:16:15.219   06:26:32	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:15.219   06:26:32	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:15.219   06:26:32	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:16:15.219   06:26:32	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:16:15.219   06:26:32	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:15.219   06:26:32	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:15.219   06:26:32	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:15.219   06:26:32	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:15.219   06:26:32	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:15.219   06:26:32	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:15.219   06:26:32	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:15.219   06:26:32	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:15.219   06:26:32	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:16:15.219   06:26:32	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:16:15.219  Cannot find device "nvmf_tgt_br"
00:16:15.219   06:26:32	-- nvmf/common.sh@154 -- # true
00:16:15.219   06:26:32	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:16:15.219  Cannot find device "nvmf_tgt_br2"
00:16:15.219   06:26:32	-- nvmf/common.sh@155 -- # true
00:16:15.219   06:26:32	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:16:15.219   06:26:32	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:16:15.219  Cannot find device "nvmf_tgt_br"
00:16:15.219   06:26:32	-- nvmf/common.sh@157 -- # true
00:16:15.219   06:26:32	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:16:15.219  Cannot find device "nvmf_tgt_br2"
00:16:15.219   06:26:32	-- nvmf/common.sh@158 -- # true
00:16:15.219   06:26:32	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:16:15.219   06:26:32	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:16:15.219   06:26:32	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:15.219  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:15.219   06:26:32	-- nvmf/common.sh@161 -- # true
00:16:15.219   06:26:32	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:15.219  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:15.219   06:26:32	-- nvmf/common.sh@162 -- # true
00:16:15.219   06:26:32	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:16:15.219   06:26:32	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:15.219   06:26:32	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:15.219   06:26:32	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:15.219   06:26:32	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:15.478   06:26:32	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:15.478   06:26:32	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:15.478   06:26:32	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:16:15.478   06:26:32	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:16:15.478   06:26:32	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:16:15.478   06:26:32	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:16:15.478   06:26:32	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:16:15.478   06:26:32	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:16:15.478   06:26:32	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:15.478   06:26:32	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:15.478   06:26:32	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:15.478   06:26:32	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:16:15.478   06:26:32	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:16:15.478   06:26:32	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:16:15.478   06:26:32	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:15.478   06:26:32	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:15.478   06:26:32	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:15.478   06:26:32	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:15.478   06:26:32	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:16:15.478  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:15.478  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms
00:16:15.478  
00:16:15.478  --- 10.0.0.2 ping statistics ---
00:16:15.478  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:15.478  rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:16:15.478   06:26:32	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:16:15.478  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:15.478  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms
00:16:15.478  
00:16:15.478  --- 10.0.0.3 ping statistics ---
00:16:15.478  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:15.478  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:16:15.478   06:26:32	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:15.478  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:15.478  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:16:15.478  
00:16:15.478  --- 10.0.0.1 ping statistics ---
00:16:15.479  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:15.479  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
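Condensed, the nvmf_veth_init trace above builds a three-legged veth topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the nvmf_tgt_ns_spdk namespace owns nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the peer ends are joined on the nvmf_br bridge, and iptables accepts port 4420 plus bridge-internal forwarding; the three pings then confirm reachability in both directions. The same setup, stripped of the harness wrappers:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT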
00:16:15.479   06:26:32	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:15.479   06:26:32	-- nvmf/common.sh@421 -- # return 0
00:16:15.479   06:26:32	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:16:15.479   06:26:32	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:15.479   06:26:32	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:16:15.479   06:26:32	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:16:15.479   06:26:32	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:15.479   06:26:32	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:16:15.479   06:26:32	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:16:15.479   06:26:32	-- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:16:15.479   06:26:32	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:16:15.479   06:26:32	-- common/autotest_common.sh@722 -- # xtrace_disable
00:16:15.479   06:26:32	-- common/autotest_common.sh@10 -- # set +x
00:16:15.479   06:26:32	-- nvmf/common.sh@469 -- # nvmfpid=75879
00:16:15.479   06:26:32	-- nvmf/common.sh@470 -- # waitforlisten 75879
00:16:15.479   06:26:32	-- common/autotest_common.sh@829 -- # '[' -z 75879 ']'
00:16:15.479   06:26:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:15.479   06:26:32	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:15.479   06:26:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:15.479   06:26:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:15.479  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:15.479   06:26:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:15.479   06:26:32	-- common/autotest_common.sh@10 -- # set +x
00:16:15.479  [2024-12-16 06:26:32.422832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:15.479  [2024-12-16 06:26:32.422889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:15.738  [2024-12-16 06:26:32.553898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:15.738  [2024-12-16 06:26:32.641693] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:15.738  [2024-12-16 06:26:32.641850] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:15.738  [2024-12-16 06:26:32.641862] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:15.738  [2024-12-16 06:26:32.641870] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:15.738  [2024-12-16 06:26:32.642038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:15.738  [2024-12-16 06:26:32.642093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:15.738  [2024-12-16 06:26:32.642228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:15.738  [2024-12-16 06:26:32.642235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
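The nvmf_tgt application itself is launched inside the target namespace (the @468 line above): shared-memory id 0, all tracepoint groups enabled (0xFFFF, matching the Tracepoint Group Mask notice), and core mask 0xF, which is why four reactors report in. A hedged sketch of the launch-and-wait step; the harness uses its own waitforlisten helper, so the rpc.py polling shown here is only an illustration:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the app answers RPCs on /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null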
00:16:16.673   06:26:33	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:16.673   06:26:33	-- common/autotest_common.sh@862 -- # return 0
00:16:16.673   06:26:33	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:16:16.673   06:26:33	-- common/autotest_common.sh@728 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673   06:26:33	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:16.673   06:26:33	-- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:16.673   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673  [2024-12-16 06:26:33.530112] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:16.673   06:26:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.673   06:26:33	-- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:16.673   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673  Malloc0
00:16:16.673   06:26:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.673   06:26:33	-- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:16.673   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673   06:26:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.673   06:26:33	-- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:16.673   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673   06:26:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.673   06:26:33	-- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:16.673   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673  [2024-12-16 06:26:33.598806] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:16.673   06:26:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
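Taken together, the rpc_cmd calls from @17 through @23 provision the target end to end: a TCP transport, a 64 MB/512 B malloc bdev, subsystem cnode1, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. Re-expressed as plain rpc.py invocations with the same arguments as the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420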
00:16:16.673   06:26:33	-- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:16:16.673  test case1: single bdev can't be used in multiple subsystems
00:16:16.673   06:26:33	-- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:16:16.673   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673   06:26:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.673   06:26:33	-- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:16:16.673   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673   06:26:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.673   06:26:33	-- target/nmic.sh@28 -- # nmic_status=0
00:16:16.673   06:26:33	-- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:16:16.673   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.673   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.673  [2024-12-16 06:26:33.622601] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:16:16.673  [2024-12-16 06:26:33.622638] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:16:16.673  [2024-12-16 06:26:33.622649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.673  2024/12/16 06:26:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:16.673  request:
00:16:16.673  {
00:16:16.673  "method": "nvmf_subsystem_add_ns",
00:16:16.674  "params": {
00:16:16.674  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:16:16.674  "namespace": {
00:16:16.674  "bdev_name": "Malloc0"
00:16:16.674  }
00:16:16.674  }
00:16:16.674  }
00:16:16.674  Got JSON-RPC error response
00:16:16.674  GoRPCClient: error on JSON-RPC call
00:16:16.674   06:26:33	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:16:16.674   06:26:33	-- target/nmic.sh@29 -- # nmic_status=1
00:16:16.674   06:26:33	-- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:16:16.674   Adding namespace failed - expected result.
00:16:16.674   06:26:33	-- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
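Test case 1 above is a negative test: once Malloc0 is attached to cnode1 it is claimed exclusive_write by the NVMe-oF target (see the bdev_open error above), so attaching the same bdev to cnode2 must fail, and the script converts the RPC failure into a pass. Distilled into a sketch of the same check:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # expected-failure check, mirroring nmic.sh's nmic_status logic
  if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected success: one bdev must not back two subsystems" >&2
      exit 1
  fi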
00:16:16.674  test case2: host connect to nvmf target in multiple paths
00:16:16.674   06:26:33	-- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:16:16.674   06:26:33	-- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:16.674   06:26:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.674   06:26:33	-- common/autotest_common.sh@10 -- # set +x
00:16:16.674  [2024-12-16 06:26:33.634693] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:16:16.674   06:26:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.674   06:26:33	-- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:16.932   06:26:33	-- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
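Test case 2 exercises the opposite direction: a second listener is added on port 4421 (@40) and the initiator connects to the same subsystem through both portals (@41-@42), which is why the disconnect later in this test reports two controllers. The two connect calls, using the host NQN/ID generated earlier in this log:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e \
      --hostid=637bef51-f626-4f39-9a90-287f11e9b21e
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e \
      --hostid=637bef51-f626-4f39-9a90-287f11e9b21e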
00:16:17.190   06:26:33	-- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:16:17.190   06:26:33	-- common/autotest_common.sh@1187 -- # local i=0
00:16:17.190   06:26:33	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:16:17.190   06:26:33	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:16:17.190   06:26:33	-- common/autotest_common.sh@1194 -- # sleep 2
00:16:19.090   06:26:35	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:16:19.090    06:26:35	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:16:19.090    06:26:35	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:16:19.090   06:26:36	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:16:19.090   06:26:36	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:16:19.090   06:26:36	-- common/autotest_common.sh@1197 -- # return 0
00:16:19.090   06:26:36	-- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:16:19.090  [global]
00:16:19.090  thread=1
00:16:19.090  invalidate=1
00:16:19.090  rw=write
00:16:19.090  time_based=1
00:16:19.090  runtime=1
00:16:19.090  ioengine=libaio
00:16:19.090  direct=1
00:16:19.090  bs=4096
00:16:19.090  iodepth=1
00:16:19.090  norandommap=0
00:16:19.090  numjobs=1
00:16:19.090  
00:16:19.090  verify_dump=1
00:16:19.090  verify_backlog=512
00:16:19.090  verify_state_save=0
00:16:19.090  do_verify=1
00:16:19.090  verify=crc32c-intel
00:16:19.090  [job0]
00:16:19.090  filename=/dev/nvme0n1
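The job file printed above is what fio-wrapper generates for a one-second, queue-depth-1, 4 KiB sequential-write-with-verify pass against the namespace that just surfaced as /dev/nvme0n1. A roughly equivalent standalone invocation, as an option-for-option translation of that job file (the wrapper itself drives fio differently):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512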
00:16:19.090  Could not set queue depth (nvme0n1)
00:16:19.349  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:19.349  fio-3.35
00:16:19.349  Starting 1 thread
00:16:20.725  
00:16:20.725  job0: (groupid=0, jobs=1): err= 0: pid=75989: Mon Dec 16 06:26:37 2024
00:16:20.725    read: IOPS=3417, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1001msec)
00:16:20.725      slat (nsec): min=11658, max=62844, avg=14370.56, stdev=3995.20
00:16:20.725      clat (usec): min=111, max=440, avg=143.34, stdev=20.30
00:16:20.725       lat (usec): min=124, max=457, avg=157.71, stdev=21.34
00:16:20.725      clat percentiles (usec):
00:16:20.725       |  1.00th=[  119],  5.00th=[  124], 10.00th=[  127], 20.00th=[  131],
00:16:20.725       | 30.00th=[  135], 40.00th=[  137], 50.00th=[  139], 60.00th=[  143],
00:16:20.725       | 70.00th=[  147], 80.00th=[  155], 90.00th=[  167], 95.00th=[  178],
00:16:20.725       | 99.00th=[  204], 99.50th=[  223], 99.90th=[  392], 99.95th=[  412],
00:16:20.725       | 99.99th=[  441]
00:16:20.725    write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets
00:16:20.725      slat (nsec): min=16683, max=87772, avg=21441.67, stdev=6317.55
00:16:20.725      clat (usec): min=79, max=369, avg=104.04, stdev=17.70
00:16:20.725       lat (usec): min=96, max=413, avg=125.48, stdev=20.10
00:16:20.725      clat percentiles (usec):
00:16:20.725       |  1.00th=[   84],  5.00th=[   88], 10.00th=[   90], 20.00th=[   93],
00:16:20.725       | 30.00th=[   95], 40.00th=[   98], 50.00th=[  100], 60.00th=[  102],
00:16:20.725       | 70.00th=[  106], 80.00th=[  113], 90.00th=[  125], 95.00th=[  137],
00:16:20.725       | 99.00th=[  161], 99.50th=[  174], 99.90th=[  277], 99.95th=[  367],
00:16:20.725       | 99.99th=[  371]
00:16:20.725     bw (  KiB/s): min=15280, max=15280, per=100.00%, avg=15280.00, stdev= 0.00, samples=1
00:16:20.725     iops        : min= 3820, max= 3820, avg=3820.00, stdev= 0.00, samples=1
00:16:20.725    lat (usec)   : 100=26.12%, 250=73.62%, 500=0.26%
00:16:20.725    cpu          : usr=2.40%, sys=9.20%, ctx=7007, majf=0, minf=5
00:16:20.725    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:20.725       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:20.725       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:20.725       issued rwts: total=3421,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:20.725       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:20.725  
00:16:20.725  Run status group 0 (all jobs):
00:16:20.725     READ: bw=13.3MiB/s (14.0MB/s), 13.3MiB/s-13.3MiB/s (14.0MB/s-14.0MB/s), io=13.4MiB (14.0MB), run=1001-1001msec
00:16:20.725    WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec
00:16:20.725  
00:16:20.725  Disk stats (read/write):
00:16:20.725    nvme0n1: ios=3122/3193, merge=0/0, ticks=471/374, in_queue=845, util=91.18%
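Sanity check on the summary: the READ/WRITE bandwidth lines follow directly from the issued I/O counts and the ~1.001 s runtime:

  reads : 3421 IOs x 4096 B = 14,012,416 B over ~1.001 s ~= 14.0 MB/s (13.3 MiB/s)
  writes: 3584 IOs x 4096 B = 14,680,064 B over ~1.001 s ~= 14.7 MB/s (14.0 MiB/s)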
00:16:20.725   06:26:37	-- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:20.725  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:16:20.725   06:26:37	-- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:20.725   06:26:37	-- common/autotest_common.sh@1208 -- # local i=0
00:16:20.725   06:26:37	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:16:20.725   06:26:37	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.725   06:26:37	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.725   06:26:37	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:16:20.725   06:26:37	-- common/autotest_common.sh@1220 -- # return 0
00:16:20.725   06:26:37	-- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:16:20.725   06:26:37	-- target/nmic.sh@53 -- # nvmftestfini
00:16:20.725   06:26:37	-- nvmf/common.sh@476 -- # nvmfcleanup
00:16:20.725   06:26:37	-- nvmf/common.sh@116 -- # sync
00:16:20.725   06:26:37	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:16:20.725   06:26:37	-- nvmf/common.sh@119 -- # set +e
00:16:20.725   06:26:37	-- nvmf/common.sh@120 -- # for i in {1..20}
00:16:20.725   06:26:37	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:16:20.725  rmmod nvme_tcp
00:16:20.725  rmmod nvme_fabrics
00:16:20.725  rmmod nvme_keyring
00:16:20.725   06:26:37	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:20.725   06:26:37	-- nvmf/common.sh@123 -- # set -e
00:16:20.725   06:26:37	-- nvmf/common.sh@124 -- # return 0
00:16:20.725   06:26:37	-- nvmf/common.sh@477 -- # '[' -n 75879 ']'
00:16:20.725   06:26:37	-- nvmf/common.sh@478 -- # killprocess 75879
00:16:20.725   06:26:37	-- common/autotest_common.sh@936 -- # '[' -z 75879 ']'
00:16:20.725   06:26:37	-- common/autotest_common.sh@940 -- # kill -0 75879
00:16:20.725    06:26:37	-- common/autotest_common.sh@941 -- # uname
00:16:20.725   06:26:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:20.725    06:26:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75879
00:16:20.725   06:26:37	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:20.725   06:26:37	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:20.725  killing process with pid 75879
00:16:20.725   06:26:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 75879'
00:16:20.725   06:26:37	-- common/autotest_common.sh@955 -- # kill 75879
00:16:20.725   06:26:37	-- common/autotest_common.sh@960 -- # wait 75879
00:16:20.984   06:26:37	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:16:20.984   06:26:37	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:16:20.984   06:26:37	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:16:20.984   06:26:37	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:20.984   06:26:37	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:16:20.984   06:26:37	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:20.984   06:26:37	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:20.984    06:26:37	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:20.984   06:26:37	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:16:20.984  ************************************
00:16:20.984  END TEST nvmf_nmic
00:16:20.984  ************************************
00:16:20.984  
00:16:20.984  real	0m6.107s
00:16:20.984  user	0m20.490s
00:16:20.984  sys	0m1.337s
00:16:20.984   06:26:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:20.984   06:26:37	-- common/autotest_common.sh@10 -- # set +x
00:16:20.984   06:26:37	-- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp
00:16:21.243   06:26:37	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:21.243   06:26:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:21.243   06:26:37	-- common/autotest_common.sh@10 -- # set +x
00:16:21.243  ************************************
00:16:21.243  START TEST nvmf_fio_target
00:16:21.243  ************************************
00:16:21.243   06:26:37	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp
00:16:21.243  * Looking for test storage...
00:16:21.243  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:21.243    06:26:38	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:21.243     06:26:38	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:21.243     06:26:38	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:21.243    06:26:38	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:21.243    06:26:38	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:21.243    06:26:38	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:21.243    06:26:38	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:21.243    06:26:38	-- scripts/common.sh@335 -- # IFS=.-:
00:16:21.243    06:26:38	-- scripts/common.sh@335 -- # read -ra ver1
00:16:21.243    06:26:38	-- scripts/common.sh@336 -- # IFS=.-:
00:16:21.243    06:26:38	-- scripts/common.sh@336 -- # read -ra ver2
00:16:21.243    06:26:38	-- scripts/common.sh@337 -- # local 'op=<'
00:16:21.243    06:26:38	-- scripts/common.sh@339 -- # ver1_l=2
00:16:21.243    06:26:38	-- scripts/common.sh@340 -- # ver2_l=1
00:16:21.243    06:26:38	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:21.243    06:26:38	-- scripts/common.sh@343 -- # case "$op" in
00:16:21.243    06:26:38	-- scripts/common.sh@344 -- # : 1
00:16:21.243    06:26:38	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:21.243    06:26:38	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:21.243     06:26:38	-- scripts/common.sh@364 -- # decimal 1
00:16:21.243     06:26:38	-- scripts/common.sh@352 -- # local d=1
00:16:21.243     06:26:38	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:21.243     06:26:38	-- scripts/common.sh@354 -- # echo 1
00:16:21.243    06:26:38	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:21.243     06:26:38	-- scripts/common.sh@365 -- # decimal 2
00:16:21.243     06:26:38	-- scripts/common.sh@352 -- # local d=2
00:16:21.243     06:26:38	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:21.243     06:26:38	-- scripts/common.sh@354 -- # echo 2
00:16:21.243    06:26:38	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:21.243    06:26:38	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:21.243    06:26:38	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:21.243    06:26:38	-- scripts/common.sh@367 -- # return 0
00:16:21.243    06:26:38	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:21.243    06:26:38	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:21.243  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:21.243  		--rc genhtml_branch_coverage=1
00:16:21.243  		--rc genhtml_function_coverage=1
00:16:21.243  		--rc genhtml_legend=1
00:16:21.243  		--rc geninfo_all_blocks=1
00:16:21.244  		--rc geninfo_unexecuted_blocks=1
00:16:21.244  		
00:16:21.244  		'
00:16:21.244    06:26:38	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:21.244  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:21.244  		--rc genhtml_branch_coverage=1
00:16:21.244  		--rc genhtml_function_coverage=1
00:16:21.244  		--rc genhtml_legend=1
00:16:21.244  		--rc geninfo_all_blocks=1
00:16:21.244  		--rc geninfo_unexecuted_blocks=1
00:16:21.244  		
00:16:21.244  		'
00:16:21.244    06:26:38	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:21.244  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:21.244  		--rc genhtml_branch_coverage=1
00:16:21.244  		--rc genhtml_function_coverage=1
00:16:21.244  		--rc genhtml_legend=1
00:16:21.244  		--rc geninfo_all_blocks=1
00:16:21.244  		--rc geninfo_unexecuted_blocks=1
00:16:21.244  		
00:16:21.244  		'
00:16:21.244    06:26:38	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:21.244  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:21.244  		--rc genhtml_branch_coverage=1
00:16:21.244  		--rc genhtml_function_coverage=1
00:16:21.244  		--rc genhtml_legend=1
00:16:21.244  		--rc geninfo_all_blocks=1
00:16:21.244  		--rc geninfo_unexecuted_blocks=1
00:16:21.244  		
00:16:21.244  		'
00:16:21.244   06:26:38	-- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:21.244     06:26:38	-- nvmf/common.sh@7 -- # uname -s
00:16:21.244    06:26:38	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:21.244    06:26:38	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:21.244    06:26:38	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:21.244    06:26:38	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:21.244    06:26:38	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:21.244    06:26:38	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:21.244    06:26:38	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:21.244    06:26:38	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:21.244    06:26:38	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:21.244     06:26:38	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:21.244    06:26:38	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:16:21.244    06:26:38	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:16:21.244    06:26:38	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:21.244    06:26:38	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:21.244    06:26:38	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:21.244    06:26:38	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:21.244     06:26:38	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:21.244     06:26:38	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:21.244     06:26:38	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:21.244      06:26:38	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:21.244      06:26:38	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:21.244      06:26:38	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:21.244      06:26:38	-- paths/export.sh@5 -- # export PATH
00:16:21.244      06:26:38	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:21.244    06:26:38	-- nvmf/common.sh@46 -- # : 0
00:16:21.244    06:26:38	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:21.244    06:26:38	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:21.244    06:26:38	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:21.244    06:26:38	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:21.244    06:26:38	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:21.244    06:26:38	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:21.244    06:26:38	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:21.244    06:26:38	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:21.244   06:26:38	-- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:21.244   06:26:38	-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:21.244   06:26:38	-- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:21.244   06:26:38	-- target/fio.sh@16 -- # nvmftestinit
00:16:21.244   06:26:38	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:16:21.244   06:26:38	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:21.244   06:26:38	-- nvmf/common.sh@436 -- # prepare_net_devs
00:16:21.244   06:26:38	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:16:21.244   06:26:38	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:16:21.244   06:26:38	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:21.244   06:26:38	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:21.244    06:26:38	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:21.244   06:26:38	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:16:21.244   06:26:38	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:16:21.244   06:26:38	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:16:21.244   06:26:38	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:16:21.244   06:26:38	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:16:21.244   06:26:38	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:16:21.244   06:26:38	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:21.244   06:26:38	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:21.244   06:26:38	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:16:21.244   06:26:38	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:16:21.244   06:26:38	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:21.244   06:26:38	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:21.244   06:26:38	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:21.244   06:26:38	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:21.244   06:26:38	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:21.244   06:26:38	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:21.244   06:26:38	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:21.244   06:26:38	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:21.244   06:26:38	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:16:21.244   06:26:38	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:16:21.244  Cannot find device "nvmf_tgt_br"
00:16:21.244   06:26:38	-- nvmf/common.sh@154 -- # true
00:16:21.244   06:26:38	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:16:21.244  Cannot find device "nvmf_tgt_br2"
00:16:21.244   06:26:38	-- nvmf/common.sh@155 -- # true
00:16:21.244   06:26:38	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:16:21.503   06:26:38	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:16:21.503  Cannot find device "nvmf_tgt_br"
00:16:21.503   06:26:38	-- nvmf/common.sh@157 -- # true
00:16:21.503   06:26:38	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:16:21.503  Cannot find device "nvmf_tgt_br2"
00:16:21.503   06:26:38	-- nvmf/common.sh@158 -- # true
00:16:21.503   06:26:38	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:16:21.503   06:26:38	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:16:21.503   06:26:38	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:21.503  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:21.503   06:26:38	-- nvmf/common.sh@161 -- # true
00:16:21.503   06:26:38	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:21.503  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:21.503   06:26:38	-- nvmf/common.sh@162 -- # true
00:16:21.503   06:26:38	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:16:21.503   06:26:38	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:21.503   06:26:38	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:21.503   06:26:38	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:21.503   06:26:38	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:21.503   06:26:38	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:21.503   06:26:38	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:21.503   06:26:38	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:16:21.503   06:26:38	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:16:21.503   06:26:38	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:16:21.503   06:26:38	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:16:21.503   06:26:38	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:16:21.503   06:26:38	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:16:21.503   06:26:38	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:21.503   06:26:38	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:21.503   06:26:38	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:21.503   06:26:38	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:16:21.503   06:26:38	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:16:21.503   06:26:38	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:16:21.503   06:26:38	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:21.503   06:26:38	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:21.503   06:26:38	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:21.762   06:26:38	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:21.762   06:26:38	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:16:21.762  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:21.762  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms
00:16:21.762  
00:16:21.762  --- 10.0.0.2 ping statistics ---
00:16:21.762  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:21.762  rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:16:21.762   06:26:38	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:16:21.762  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:21.762  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms
00:16:21.762  
00:16:21.762  --- 10.0.0.3 ping statistics ---
00:16:21.762  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:21.762  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:16:21.762   06:26:38	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:21.762  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:21.762  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:16:21.762  
00:16:21.762  --- 10.0.0.1 ping statistics ---
00:16:21.762  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:21.762  rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:16:21.762   06:26:38	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:21.762   06:26:38	-- nvmf/common.sh@421 -- # return 0
00:16:21.762   06:26:38	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:16:21.762   06:26:38	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:21.762   06:26:38	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:16:21.762   06:26:38	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:16:21.762   06:26:38	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:21.762   06:26:38	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:16:21.762   06:26:38	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:16:21.762   06:26:38	-- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:16:21.762   06:26:38	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:16:21.762   06:26:38	-- common/autotest_common.sh@722 -- # xtrace_disable
00:16:21.762   06:26:38	-- common/autotest_common.sh@10 -- # set +x
00:16:21.762   06:26:38	-- nvmf/common.sh@469 -- # nvmfpid=76173
00:16:21.762   06:26:38	-- nvmf/common.sh@470 -- # waitforlisten 76173
00:16:21.762   06:26:38	-- common/autotest_common.sh@829 -- # '[' -z 76173 ']'
00:16:21.762   06:26:38	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:21.762   06:26:38	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:21.762   06:26:38	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:21.762  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:21.762   06:26:38	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:21.762   06:26:38	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:21.762   06:26:38	-- common/autotest_common.sh@10 -- # set +x
00:16:21.762  [2024-12-16 06:26:38.586175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:21.762  [2024-12-16 06:26:38.586259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:21.762  [2024-12-16 06:26:38.725715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:22.021  [2024-12-16 06:26:38.816769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:22.021  [2024-12-16 06:26:38.816907] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:22.021  [2024-12-16 06:26:38.816917] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:22.021  [2024-12-16 06:26:38.816925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:22.021  [2024-12-16 06:26:38.817000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:22.021  [2024-12-16 06:26:38.817163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:22.021  [2024-12-16 06:26:38.817675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:22.021  [2024-12-16 06:26:38.819057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:22.588   06:26:39	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:22.588   06:26:39	-- common/autotest_common.sh@862 -- # return 0
00:16:22.588   06:26:39	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:16:22.588   06:26:39	-- common/autotest_common.sh@728 -- # xtrace_disable
00:16:22.588   06:26:39	-- common/autotest_common.sh@10 -- # set +x
00:16:22.588   06:26:39	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:22.588   06:26:39	-- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:16:22.869  [2024-12-16 06:26:39.806575] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:23.128    06:26:39	-- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:16:23.483   06:26:40	-- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:16:23.483    06:26:40	-- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:16:23.483   06:26:40	-- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:16:23.483    06:26:40	-- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:16:24.049   06:26:40	-- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:16:24.049    06:26:40	-- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:16:24.049   06:26:40	-- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:16:24.049   06:26:40	-- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:16:24.307    06:26:41	-- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:16:24.566   06:26:41	-- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:16:24.566    06:26:41	-- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:16:24.825   06:26:41	-- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:16:24.825    06:26:41	-- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:16:25.084   06:26:41	-- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:16:25.084   06:26:41	-- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:16:25.344   06:26:42	-- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:25.603   06:26:42	-- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:16:25.603   06:26:42	-- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:25.862   06:26:42	-- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:16:25.862   06:26:42	-- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:26.121   06:26:42	-- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:26.380  [2024-12-16 06:26:43.110226] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:26.380   06:26:43	-- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:16:26.380   06:26:43	-- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:16:26.638   06:26:43	-- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:26.897   06:26:43	-- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:16:26.897   06:26:43	-- common/autotest_common.sh@1187 -- # local i=0
00:16:26.897   06:26:43	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:16:26.897   06:26:43	-- common/autotest_common.sh@1189 -- # [[ -n 4 ]]
00:16:26.897   06:26:43	-- common/autotest_common.sh@1190 -- # nvme_device_counter=4
00:16:26.897   06:26:43	-- common/autotest_common.sh@1194 -- # sleep 2
00:16:28.804   06:26:45	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:16:28.804    06:26:45	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:16:28.804    06:26:45	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:16:28.804   06:26:45	-- common/autotest_common.sh@1196 -- # nvme_devices=4
00:16:28.804   06:26:45	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:16:28.804   06:26:45	-- common/autotest_common.sh@1197 -- # return 0
00:16:28.804   06:26:45	-- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:16:28.804  [global]
00:16:28.804  thread=1
00:16:28.804  invalidate=1
00:16:28.804  rw=write
00:16:28.804  time_based=1
00:16:28.804  runtime=1
00:16:28.804  ioengine=libaio
00:16:28.804  direct=1
00:16:28.804  bs=4096
00:16:28.804  iodepth=1
00:16:28.804  norandommap=0
00:16:28.804  numjobs=1
00:16:28.804  
00:16:28.804  verify_dump=1
00:16:28.804  verify_backlog=512
00:16:28.804  verify_state_save=0
00:16:28.804  do_verify=1
00:16:28.804  verify=crc32c-intel
00:16:28.804  [job0]
00:16:28.804  filename=/dev/nvme0n1
00:16:28.804  [job1]
00:16:28.804  filename=/dev/nvme0n2
00:16:28.804  [job2]
00:16:28.804  filename=/dev/nvme0n3
00:16:28.804  [job3]
00:16:28.804  filename=/dev/nvme0n4
00:16:29.064  Could not set queue depth (nvme0n1)
00:16:29.064  Could not set queue depth (nvme0n2)
00:16:29.064  Could not set queue depth (nvme0n3)
00:16:29.064  Could not set queue depth (nvme0n4)
00:16:29.064  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:29.064  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:29.064  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:29.064  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:29.064  fio-3.35
00:16:29.064  Starting 4 threads
00:16:30.443  
00:16:30.443  job0: (groupid=0, jobs=1): err= 0: pid=76470: Mon Dec 16 06:26:47 2024
00:16:30.443    read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:16:30.443      slat (nsec): min=16695, max=89229, avg=23767.43, stdev=8668.22
00:16:30.443      clat (usec): min=129, max=2405, avg=298.23, stdev=129.05
00:16:30.443       lat (usec): min=148, max=2425, avg=322.00, stdev=131.83
00:16:30.443      clat percentiles (usec):
00:16:30.443       |  1.00th=[  139],  5.00th=[  159], 10.00th=[  174], 20.00th=[  192],
00:16:30.443       | 30.00th=[  210], 40.00th=[  243], 50.00th=[  289], 60.00th=[  322],
00:16:30.443       | 70.00th=[  359], 80.00th=[  396], 90.00th=[  433], 95.00th=[  465],
00:16:30.443       | 99.00th=[  619], 99.50th=[  668], 99.90th=[ 2147], 99.95th=[ 2409],
00:16:30.443       | 99.99th=[ 2409]
00:16:30.443    write: IOPS=1919, BW=7676KiB/s (7861kB/s)(7684KiB/1001msec); 0 zone resets
00:16:30.443      slat (usec): min=24, max=110, avg=33.98, stdev= 9.75
00:16:30.443      clat (usec): min=97, max=483, avg=224.86, stdev=77.85
00:16:30.443       lat (usec): min=123, max=524, avg=258.85, stdev=83.04
00:16:30.443      clat percentiles (usec):
00:16:30.443       |  1.00th=[  115],  5.00th=[  137], 10.00th=[  145], 20.00th=[  155],
00:16:30.443       | 30.00th=[  167], 40.00th=[  184], 50.00th=[  210], 60.00th=[  233],
00:16:30.443       | 70.00th=[  258], 80.00th=[  285], 90.00th=[  343], 95.00th=[  383],
00:16:30.443       | 99.00th=[  433], 99.50th=[  449], 99.90th=[  482], 99.95th=[  482],
00:16:30.443       | 99.99th=[  482]
00:16:30.443     bw (  KiB/s): min= 6688, max= 6688, per=20.75%, avg=6688.00, stdev= 0.00, samples=1
00:16:30.443     iops        : min= 1672, max= 1672, avg=1672.00, stdev= 0.00, samples=1
00:16:30.443    lat (usec)   : 100=0.03%, 250=55.66%, 500=42.96%, 750=1.27%, 1000=0.03%
00:16:30.443    lat (msec)   : 4=0.06%
00:16:30.443    cpu          : usr=2.40%, sys=7.00%, ctx=3457, majf=0, minf=15
00:16:30.443    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:30.443       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:30.443       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:30.443       issued rwts: total=1536,1921,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:30.443       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:30.443  job1: (groupid=0, jobs=1): err= 0: pid=76475: Mon Dec 16 06:26:47 2024
00:16:30.443    read: IOPS=1785, BW=7141KiB/s (7312kB/s)(7148KiB/1001msec)
00:16:30.443      slat (nsec): min=9719, max=55942, avg=15827.87, stdev=5074.83
00:16:30.443      clat (usec): min=151, max=4151, avg=259.38, stdev=120.14
00:16:30.443       lat (usec): min=169, max=4164, avg=275.21, stdev=119.87
00:16:30.443      clat percentiles (usec):
00:16:30.443       |  1.00th=[  165],  5.00th=[  182], 10.00th=[  194], 20.00th=[  208],
00:16:30.443       | 30.00th=[  217], 40.00th=[  225], 50.00th=[  235], 60.00th=[  245],
00:16:30.443       | 70.00th=[  258], 80.00th=[  281], 90.00th=[  383], 95.00th=[  424],
00:16:30.443       | 99.00th=[  523], 99.50th=[  586], 99.90th=[  709], 99.95th=[ 4146],
00:16:30.443       | 99.99th=[ 4146]
00:16:30.443    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:16:30.443      slat (usec): min=9, max=100, avg=23.38, stdev= 7.67
00:16:30.443      clat (usec): min=122, max=715, avg=221.66, stdev=68.96
00:16:30.443       lat (usec): min=145, max=732, avg=245.04, stdev=67.08
00:16:30.443      clat percentiles (usec):
00:16:30.443       |  1.00th=[  137],  5.00th=[  147], 10.00th=[  157], 20.00th=[  169],
00:16:30.443       | 30.00th=[  180], 40.00th=[  190], 50.00th=[  200], 60.00th=[  212],
00:16:30.443       | 70.00th=[  229], 80.00th=[  273], 90.00th=[  330], 95.00th=[  363],
00:16:30.443       | 99.00th=[  429], 99.50th=[  461], 99.90th=[  545], 99.95th=[  562],
00:16:30.443       | 99.99th=[  717]
00:16:30.443     bw (  KiB/s): min= 9368, max= 9368, per=29.07%, avg=9368.00, stdev= 0.00, samples=1
00:16:30.443     iops        : min= 2342, max= 2342, avg=2342.00, stdev= 0.00, samples=1
00:16:30.443    lat (usec)   : 250=70.43%, 500=28.68%, 750=0.86%
00:16:30.444    lat (msec)   : 10=0.03%
00:16:30.444    cpu          : usr=1.30%, sys=5.90%, ctx=3835, majf=0, minf=5
00:16:30.444    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:30.444       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:30.444       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:30.444       issued rwts: total=1787,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:30.444       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:30.444  job2: (groupid=0, jobs=1): err= 0: pid=76477: Mon Dec 16 06:26:47 2024
00:16:30.444    read: IOPS=2095, BW=8384KiB/s (8585kB/s)(8392KiB/1001msec)
00:16:30.444      slat (nsec): min=12788, max=75029, avg=16832.48, stdev=5340.33
00:16:30.444      clat (usec): min=134, max=4341, avg=204.18, stdev=99.65
00:16:30.444       lat (usec): min=147, max=4359, avg=221.01, stdev=99.83
00:16:30.444      clat percentiles (usec):
00:16:30.444       |  1.00th=[  145],  5.00th=[  157], 10.00th=[  165], 20.00th=[  178],
00:16:30.444       | 30.00th=[  188], 40.00th=[  194], 50.00th=[  200], 60.00th=[  208],
00:16:30.444       | 70.00th=[  217], 80.00th=[  225], 90.00th=[  237], 95.00th=[  247],
00:16:30.444       | 99.00th=[  277], 99.50th=[  289], 99.90th=[  938], 99.95th=[ 1434],
00:16:30.444       | 99.99th=[ 4359]
00:16:30.444    write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:16:30.444      slat (usec): min=18, max=113, avg=24.37, stdev= 6.97
00:16:30.444      clat (usec): min=97, max=29597, avg=182.11, stdev=585.59
00:16:30.444       lat (usec): min=117, max=29617, avg=206.48, stdev=585.58
00:16:30.444      clat percentiles (usec):
00:16:30.444       |  1.00th=[  109],  5.00th=[  120], 10.00th=[  130], 20.00th=[  147],
00:16:30.444       | 30.00th=[  155], 40.00th=[  163], 50.00th=[  169], 60.00th=[  178],
00:16:30.444       | 70.00th=[  184], 80.00th=[  192], 90.00th=[  204], 95.00th=[  217],
00:16:30.444       | 99.00th=[  241], 99.50th=[  260], 99.90th=[ 2245], 99.95th=[ 2474],
00:16:30.444       | 99.99th=[29492]
00:16:30.444     bw (  KiB/s): min=10600, max=10600, per=32.89%, avg=10600.00, stdev= 0.00, samples=1
00:16:30.444     iops        : min= 2650, max= 2650, avg=2650.00, stdev= 0.00, samples=1
00:16:30.444    lat (usec)   : 100=0.06%, 250=97.77%, 500=2.02%, 750=0.02%, 1000=0.02%
00:16:30.444    lat (msec)   : 2=0.02%, 4=0.04%, 10=0.02%, 50=0.02%
00:16:30.444    cpu          : usr=2.00%, sys=6.70%, ctx=4659, majf=0, minf=9
00:16:30.444    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:30.444       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:30.444       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:30.444       issued rwts: total=2098,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:30.444       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:30.444  job3: (groupid=0, jobs=1): err= 0: pid=76478: Mon Dec 16 06:26:47 2024
00:16:30.444    read: IOPS=1258, BW=5035KiB/s (5156kB/s)(5040KiB/1001msec)
00:16:30.444      slat (nsec): min=8245, max=90393, avg=22860.15, stdev=9550.06
00:16:30.444      clat (usec): min=153, max=4083, avg=368.11, stdev=128.97
00:16:30.444       lat (usec): min=173, max=4101, avg=390.97, stdev=130.01
00:16:30.444      clat percentiles (usec):
00:16:30.444       |  1.00th=[  221],  5.00th=[  255], 10.00th=[  273], 20.00th=[  297],
00:16:30.444       | 30.00th=[  322], 40.00th=[  347], 50.00th=[  363], 60.00th=[  379],
00:16:30.444       | 70.00th=[  400], 80.00th=[  420], 90.00th=[  453], 95.00th=[  482],
00:16:30.444       | 99.00th=[  611], 99.50th=[  644], 99.90th=[  766], 99.95th=[ 4080],
00:16:30.444       | 99.99th=[ 4080]
00:16:30.444    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:16:30.444      slat (usec): min=9, max=104, avg=33.28, stdev=13.09
00:16:30.444      clat (usec): min=152, max=603, avg=292.40, stdev=63.17
00:16:30.444       lat (usec): min=178, max=621, avg=325.68, stdev=62.14
00:16:30.444      clat percentiles (usec):
00:16:30.444       |  1.00th=[  178],  5.00th=[  204], 10.00th=[  221], 20.00th=[  237],
00:16:30.444       | 30.00th=[  253], 40.00th=[  269], 50.00th=[  285], 60.00th=[  302],
00:16:30.444       | 70.00th=[  322], 80.00th=[  347], 90.00th=[  379], 95.00th=[  404],
00:16:30.444       | 99.00th=[  449], 99.50th=[  469], 99.90th=[  515], 99.95th=[  603],
00:16:30.444       | 99.99th=[  603]
00:16:30.444     bw (  KiB/s): min= 6760, max= 6760, per=20.98%, avg=6760.00, stdev= 0.00, samples=1
00:16:30.444     iops        : min= 1690, max= 1690, avg=1690.00, stdev= 0.00, samples=1
00:16:30.444    lat (usec)   : 250=17.42%, 500=80.83%, 750=1.68%, 1000=0.04%
00:16:30.444    lat (msec)   : 10=0.04%
00:16:30.444    cpu          : usr=1.50%, sys=6.10%, ctx=2806, majf=0, minf=7
00:16:30.444    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:30.444       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:30.444       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:30.444       issued rwts: total=1260,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:30.444       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:30.444  
00:16:30.444  Run status group 0 (all jobs):
00:16:30.444     READ: bw=26.1MiB/s (27.3MB/s), 5035KiB/s-8384KiB/s (5156kB/s-8585kB/s), io=26.1MiB (27.4MB), run=1001-1001msec
00:16:30.444    WRITE: bw=31.5MiB/s (33.0MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.5MiB (33.0MB), run=1001-1001msec
00:16:30.444  
00:16:30.444  Disk stats (read/write):
00:16:30.444    nvme0n1: ios=1278/1536, merge=0/0, ticks=422/391, in_queue=813, util=87.98%
00:16:30.444    nvme0n2: ios=1583/1938, merge=0/0, ticks=452/440, in_queue=892, util=92.91%
00:16:30.444    nvme0n3: ios=1901/2048, merge=0/0, ticks=395/391, in_queue=786, util=88.50%
00:16:30.444    nvme0n4: ios=1074/1405, merge=0/0, ticks=443/426, in_queue=869, util=92.39%
00:16:30.444   06:26:47	-- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:16:30.444  [global]
00:16:30.444  thread=1
00:16:30.444  invalidate=1
00:16:30.444  rw=randwrite
00:16:30.444  time_based=1
00:16:30.444  runtime=1
00:16:30.444  ioengine=libaio
00:16:30.444  direct=1
00:16:30.444  bs=4096
00:16:30.444  iodepth=1
00:16:30.444  norandommap=0
00:16:30.444  numjobs=1
00:16:30.444  
00:16:30.444  verify_dump=1
00:16:30.444  verify_backlog=512
00:16:30.444  verify_state_save=0
00:16:30.444  do_verify=1
00:16:30.444  verify=crc32c-intel
00:16:30.444  [job0]
00:16:30.444  filename=/dev/nvme0n1
00:16:30.444  [job1]
00:16:30.444  filename=/dev/nvme0n2
00:16:30.444  [job2]
00:16:30.444  filename=/dev/nvme0n3
00:16:30.444  [job3]
00:16:30.444  filename=/dev/nvme0n4
00:16:30.444  Could not set queue depth (nvme0n1)
00:16:30.444  Could not set queue depth (nvme0n2)
00:16:30.444  Could not set queue depth (nvme0n3)
00:16:30.444  Could not set queue depth (nvme0n4)
00:16:30.444  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:30.444  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:30.444  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:30.444  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:30.444  fio-3.35
00:16:30.444  Starting 4 threads
00:16:31.894  
00:16:31.894  job0: (groupid=0, jobs=1): err= 0: pid=76531: Mon Dec 16 06:26:48 2024
00:16:31.894    read: IOPS=1061, BW=4248KiB/s (4350kB/s)(4252KiB/1001msec)
00:16:31.894      slat (nsec): min=15897, max=90257, avg=29475.17, stdev=9222.92
00:16:31.894      clat (usec): min=232, max=695, avg=388.88, stdev=44.83
00:16:31.894       lat (usec): min=262, max=742, avg=418.36, stdev=45.19
00:16:31.894      clat percentiles (usec):
00:16:31.894       |  1.00th=[  310],  5.00th=[  330], 10.00th=[  343], 20.00th=[  355],
00:16:31.894       | 30.00th=[  363], 40.00th=[  375], 50.00th=[  383], 60.00th=[  392],
00:16:31.894       | 70.00th=[  408], 80.00th=[  424], 90.00th=[  445], 95.00th=[  465],
00:16:31.894       | 99.00th=[  537], 99.50th=[  562], 99.90th=[  586], 99.95th=[  693],
00:16:31.894       | 99.99th=[  693]
00:16:31.894    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:16:31.894      slat (nsec): min=25447, max=90312, avg=39647.82, stdev=7661.94
00:16:31.894      clat (usec): min=153, max=1234, avg=316.50, stdev=57.55
00:16:31.894       lat (usec): min=197, max=1269, avg=356.15, stdev=57.41
00:16:31.894      clat percentiles (usec):
00:16:31.894       |  1.00th=[  221],  5.00th=[  249], 10.00th=[  262], 20.00th=[  273],
00:16:31.894       | 30.00th=[  285], 40.00th=[  293], 50.00th=[  306], 60.00th=[  314],
00:16:31.894       | 70.00th=[  330], 80.00th=[  367], 90.00th=[  396], 95.00th=[  412],
00:16:31.894       | 99.00th=[  461], 99.50th=[  490], 99.90th=[  529], 99.95th=[ 1237],
00:16:31.894       | 99.99th=[ 1237]
00:16:31.894     bw (  KiB/s): min= 5968, max= 5968, per=24.31%, avg=5968.00, stdev= 0.00, samples=1
00:16:31.894     iops        : min= 1492, max= 1492, avg=1492.00, stdev= 0.00, samples=1
00:16:31.894    lat (usec)   : 250=3.12%, 500=95.84%, 750=1.00%
00:16:31.894    lat (msec)   : 2=0.04%
00:16:31.894    cpu          : usr=2.30%, sys=6.60%, ctx=2600, majf=0, minf=11
00:16:31.894    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:31.894       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.894       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.894       issued rwts: total=1063,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:31.894       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:31.894  job1: (groupid=0, jobs=1): err= 0: pid=76532: Mon Dec 16 06:26:48 2024
00:16:31.894    read: IOPS=1042, BW=4172KiB/s (4272kB/s)(4176KiB/1001msec)
00:16:31.894      slat (nsec): min=15636, max=57259, avg=21024.30, stdev=6436.27
00:16:31.894      clat (usec): min=288, max=930, avg=403.35, stdev=52.35
00:16:31.894       lat (usec): min=341, max=949, avg=424.37, stdev=53.82
00:16:31.894      clat percentiles (usec):
00:16:31.894       |  1.00th=[  334],  5.00th=[  351], 10.00th=[  359], 20.00th=[  367],
00:16:31.894       | 30.00th=[  375], 40.00th=[  383], 50.00th=[  392], 60.00th=[  404],
00:16:31.894       | 70.00th=[  420], 80.00th=[  437], 90.00th=[  457], 95.00th=[  482],
00:16:31.894       | 99.00th=[  603], 99.50th=[  676], 99.90th=[  783], 99.95th=[  930],
00:16:31.894       | 99.99th=[  930]
00:16:31.894    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:16:31.894      slat (usec): min=25, max=110, avg=41.77, stdev= 9.41
00:16:31.894      clat (usec): min=158, max=1268, avg=316.15, stdev=62.70
00:16:31.894       lat (usec): min=196, max=1346, avg=357.92, stdev=61.94
00:16:31.894      clat percentiles (usec):
00:16:31.894       |  1.00th=[  231],  5.00th=[  249], 10.00th=[  260], 20.00th=[  273],
00:16:31.894       | 30.00th=[  281], 40.00th=[  289], 50.00th=[  297], 60.00th=[  314],
00:16:31.894       | 70.00th=[  334], 80.00th=[  371], 90.00th=[  396], 95.00th=[  416],
00:16:31.894       | 99.00th=[  449], 99.50th=[  498], 99.90th=[ 1156], 99.95th=[ 1270],
00:16:31.894       | 99.99th=[ 1270]
00:16:31.894     bw (  KiB/s): min= 5928, max= 5928, per=24.15%, avg=5928.00, stdev= 0.00, samples=1
00:16:31.894     iops        : min= 1482, max= 1482, avg=1482.00, stdev= 0.00, samples=1
00:16:31.894    lat (usec)   : 250=2.98%, 500=95.31%, 750=1.55%, 1000=0.08%
00:16:31.894    lat (msec)   : 2=0.08%
00:16:31.894    cpu          : usr=1.30%, sys=6.80%, ctx=2582, majf=0, minf=11
00:16:31.894    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:31.894       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.894       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.894       issued rwts: total=1044,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:31.894       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:31.894  job2: (groupid=0, jobs=1): err= 0: pid=76533: Mon Dec 16 06:26:48 2024
00:16:31.894    read: IOPS=1480, BW=5922KiB/s (6064kB/s)(5928KiB/1001msec)
00:16:31.894      slat (nsec): min=17273, max=66004, avg=26230.29, stdev=6016.37
00:16:31.894      clat (usec): min=153, max=4099, avg=332.00, stdev=105.10
00:16:31.894       lat (usec): min=174, max=4129, avg=358.23, stdev=105.06
00:16:31.894      clat percentiles (usec):
00:16:31.894       |  1.00th=[  265],  5.00th=[  285], 10.00th=[  293], 20.00th=[  306],
00:16:31.894       | 30.00th=[  314], 40.00th=[  322], 50.00th=[  326], 60.00th=[  334],
00:16:31.894       | 70.00th=[  343], 80.00th=[  351], 90.00th=[  367], 95.00th=[  383],
00:16:31.894       | 99.00th=[  429], 99.50th=[  465], 99.90th=[  938], 99.95th=[ 4113],
00:16:31.894       | 99.99th=[ 4113]
00:16:31.894    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:16:31.894      slat (nsec): min=26791, max=98973, avg=37485.43, stdev=9112.62
00:16:31.894      clat (usec): min=131, max=546, avg=262.38, stdev=45.08
00:16:31.894       lat (usec): min=161, max=589, avg=299.87, stdev=46.49
00:16:31.894      clat percentiles (usec):
00:16:31.894       |  1.00th=[  172],  5.00th=[  206], 10.00th=[  215], 20.00th=[  227],
00:16:31.894       | 30.00th=[  239], 40.00th=[  249], 50.00th=[  258], 60.00th=[  269],
00:16:31.894       | 70.00th=[  277], 80.00th=[  289], 90.00th=[  314], 95.00th=[  338],
00:16:31.894       | 99.00th=[  396], 99.50th=[  461], 99.90th=[  545], 99.95th=[  545],
00:16:31.894       | 99.99th=[  545]
00:16:31.894     bw (  KiB/s): min= 8136, max= 8136, per=33.14%, avg=8136.00, stdev= 0.00, samples=1
00:16:31.894     iops        : min= 2034, max= 2034, avg=2034.00, stdev= 0.00, samples=1
00:16:31.894    lat (usec)   : 250=21.47%, 500=78.23%, 750=0.23%, 1000=0.03%
00:16:31.895    lat (msec)   : 10=0.03%
00:16:31.895    cpu          : usr=2.00%, sys=6.90%, ctx=3035, majf=0, minf=17
00:16:31.895    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:31.895       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.895       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.895       issued rwts: total=1482,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:31.895       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:31.895  job3: (groupid=0, jobs=1): err= 0: pid=76534: Mon Dec 16 06:26:48 2024
00:16:31.895    read: IOPS=1480, BW=5922KiB/s (6064kB/s)(5928KiB/1001msec)
00:16:31.895      slat (usec): min=17, max=133, avg=22.17, stdev= 8.71
00:16:31.895      clat (usec): min=175, max=627, avg=333.55, stdev=37.05
00:16:31.895       lat (usec): min=197, max=657, avg=355.71, stdev=36.51
00:16:31.895      clat percentiles (usec):
00:16:31.895       |  1.00th=[  202],  5.00th=[  289], 10.00th=[  302], 20.00th=[  310],
00:16:31.895       | 30.00th=[  318], 40.00th=[  326], 50.00th=[  330], 60.00th=[  338],
00:16:31.895       | 70.00th=[  347], 80.00th=[  359], 90.00th=[  375], 95.00th=[  396],
00:16:31.895       | 99.00th=[  437], 99.50th=[  461], 99.90th=[  578], 99.95th=[  627],
00:16:31.895       | 99.99th=[  627]
00:16:31.895    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:16:31.895      slat (nsec): min=26410, max=86736, avg=37392.37, stdev=8464.60
00:16:31.895      clat (usec): min=140, max=2935, avg=265.62, stdev=81.10
00:16:31.895       lat (usec): min=174, max=2970, avg=303.02, stdev=81.81
00:16:31.895      clat percentiles (usec):
00:16:31.895       |  1.00th=[  188],  5.00th=[  208], 10.00th=[  217], 20.00th=[  229],
00:16:31.895       | 30.00th=[  239], 40.00th=[  251], 50.00th=[  262], 60.00th=[  269],
00:16:31.895       | 70.00th=[  281], 80.00th=[  289], 90.00th=[  314], 95.00th=[  351],
00:16:31.895       | 99.00th=[  404], 99.50th=[  429], 99.90th=[  570], 99.95th=[ 2933],
00:16:31.895       | 99.99th=[ 2933]
00:16:31.895     bw (  KiB/s): min= 7976, max= 7976, per=32.49%, avg=7976.00, stdev= 0.00, samples=1
00:16:31.895     iops        : min= 1994, max= 1994, avg=1994.00, stdev= 0.00, samples=1
00:16:31.895    lat (usec)   : 250=21.27%, 500=78.56%, 750=0.13%
00:16:31.895    lat (msec)   : 4=0.03%
00:16:31.895    cpu          : usr=1.30%, sys=6.90%, ctx=3027, majf=0, minf=9
00:16:31.895    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:31.895       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.895       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.895       issued rwts: total=1482,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:31.895       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:31.895  
00:16:31.895  Run status group 0 (all jobs):
00:16:31.895     READ: bw=19.8MiB/s (20.8MB/s), 4172KiB/s-5922KiB/s (4272kB/s-6064kB/s), io=19.8MiB (20.8MB), run=1001-1001msec
00:16:31.895    WRITE: bw=24.0MiB/s (25.1MB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1001msec
00:16:31.895  
00:16:31.895  Disk stats (read/write):
00:16:31.895    nvme0n1: ios=1074/1172, merge=0/0, ticks=444/395, in_queue=839, util=88.08%
00:16:31.895    nvme0n2: ios=1067/1154, merge=0/0, ticks=461/384, in_queue=845, util=89.15%
00:16:31.895    nvme0n3: ios=1091/1536, merge=0/0, ticks=371/425, in_queue=796, util=89.25%
00:16:31.895    nvme0n4: ios=1091/1536, merge=0/0, ticks=378/434, in_queue=812, util=89.71%
00:16:31.895   06:26:48	-- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:16:31.895  [global]
00:16:31.895  thread=1
00:16:31.895  invalidate=1
00:16:31.895  rw=write
00:16:31.895  time_based=1
00:16:31.895  runtime=1
00:16:31.895  ioengine=libaio
00:16:31.895  direct=1
00:16:31.895  bs=4096
00:16:31.895  iodepth=128
00:16:31.895  norandommap=0
00:16:31.895  numjobs=1
00:16:31.895  
00:16:31.895  verify_dump=1
00:16:31.895  verify_backlog=512
00:16:31.895  verify_state_save=0
00:16:31.895  do_verify=1
00:16:31.895  verify=crc32c-intel
00:16:31.895  [job0]
00:16:31.895  filename=/dev/nvme0n1
00:16:31.895  [job1]
00:16:31.895  filename=/dev/nvme0n2
00:16:31.895  [job2]
00:16:31.895  filename=/dev/nvme0n3
00:16:31.895  [job3]
00:16:31.895  filename=/dev/nvme0n4
00:16:31.895  Could not set queue depth (nvme0n1)
00:16:31.895  Could not set queue depth (nvme0n2)
00:16:31.895  Could not set queue depth (nvme0n3)
00:16:31.895  Could not set queue depth (nvme0n4)
00:16:31.895  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:31.895  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:31.895  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:31.895  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:31.895  fio-3.35
00:16:31.895  Starting 4 threads
00:16:33.286  
00:16:33.286  job0: (groupid=0, jobs=1): err= 0: pid=76589: Mon Dec 16 06:26:49 2024
00:16:33.286    read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1002msec)
00:16:33.286      slat (usec): min=7, max=5913, avg=119.84, stdev=629.68
00:16:33.286      clat (usec): min=619, max=23297, avg=15630.50, stdev=3297.93
00:16:33.286       lat (usec): min=1705, max=25373, avg=15750.34, stdev=3313.32
00:16:33.286      clat percentiles (usec):
00:16:33.286       |  1.00th=[ 7177],  5.00th=[ 9896], 10.00th=[10421], 20.00th=[12125],
00:16:33.286       | 30.00th=[14091], 40.00th=[16319], 50.00th=[16909], 60.00th=[17171],
00:16:33.286       | 70.00th=[17695], 80.00th=[17957], 90.00th=[18744], 95.00th=[19792],
00:16:33.286       | 99.00th=[21627], 99.50th=[21890], 99.90th=[23200], 99.95th=[23200],
00:16:33.286       | 99.99th=[23200]
00:16:33.286    write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets
00:16:33.286      slat (usec): min=9, max=5976, avg=117.20, stdev=584.02
00:16:33.286      clat (usec): min=7919, max=24162, avg=15468.17, stdev=3808.70
00:16:33.286       lat (usec): min=8003, max=24205, avg=15585.36, stdev=3814.37
00:16:33.286      clat percentiles (usec):
00:16:33.286       |  1.00th=[ 8979],  5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[11338],
00:16:33.286       | 30.00th=[12125], 40.00th=[14222], 50.00th=[16909], 60.00th=[17957],
00:16:33.286       | 70.00th=[18482], 80.00th=[19006], 90.00th=[19792], 95.00th=[20317],
00:16:33.286       | 99.00th=[20841], 99.50th=[20841], 99.90th=[21627], 99.95th=[22676],
00:16:33.286       | 99.99th=[24249]
00:16:33.286     bw (  KiB/s): min=14136, max=18632, per=29.79%, avg=16384.00, stdev=3179.15, samples=2
00:16:33.286     iops        : min= 3534, max= 4658, avg=4096.00, stdev=794.79, samples=2
00:16:33.286    lat (usec)   : 750=0.01%
00:16:33.286    lat (msec)   : 2=0.07%, 10=9.26%, 20=85.65%, 50=5.01%
00:16:33.286    cpu          : usr=4.60%, sys=11.29%, ctx=496, majf=0, minf=16
00:16:33.286    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:16:33.286       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:33.286       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:33.286       issued rwts: total=4048,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:33.286       latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:33.286  job1: (groupid=0, jobs=1): err= 0: pid=76590: Mon Dec 16 06:26:49 2024
00:16:33.286    read: IOPS=2135, BW=8542KiB/s (8747kB/s)(8568KiB/1003msec)
00:16:33.286      slat (usec): min=4, max=8510, avg=231.27, stdev=959.85
00:16:33.286      clat (usec): min=2629, max=44270, avg=29256.76, stdev=7979.03
00:16:33.286       lat (usec): min=2643, max=44285, avg=29488.03, stdev=7993.61
00:16:33.286      clat percentiles (usec):
00:16:33.286       |  1.00th=[ 6652],  5.00th=[18744], 10.00th=[20841], 20.00th=[21890],
00:16:33.286       | 30.00th=[24249], 40.00th=[27132], 50.00th=[29230], 60.00th=[30016],
00:16:33.286       | 70.00th=[32113], 80.00th=[36963], 90.00th=[41157], 95.00th=[42730],
00:16:33.286       | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303],
00:16:33.286       | 99.99th=[44303]
00:16:33.286    write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets
00:16:33.286      slat (usec): min=15, max=5606, avg=187.99, stdev=724.47
00:16:33.286      clat (usec): min=13346, max=43162, avg=24979.53, stdev=5877.93
00:16:33.286       lat (usec): min=13385, max=43197, avg=25167.52, stdev=5881.92
00:16:33.286      clat percentiles (usec):
00:16:33.286       |  1.00th=[16188],  5.00th=[19006], 10.00th=[19530], 20.00th=[20055],
00:16:33.286       | 30.00th=[20841], 40.00th=[21103], 50.00th=[22414], 60.00th=[26084],
00:16:33.286       | 70.00th=[28443], 80.00th=[29230], 90.00th=[32637], 95.00th=[36963],
00:16:33.286       | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254],
00:16:33.286       | 99.99th=[43254]
00:16:33.286     bw (  KiB/s): min=10040, max=10176, per=18.38%, avg=10108.00, stdev=96.17, samples=2
00:16:33.286     iops        : min= 2510, max= 2544, avg=2527.00, stdev=24.04, samples=2
00:16:33.286    lat (msec)   : 4=0.15%, 10=0.49%, 20=12.34%, 50=87.03%
00:16:33.286    cpu          : usr=2.30%, sys=8.48%, ctx=301, majf=0, minf=15
00:16:33.286    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:16:33.286       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:33.286       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:33.286       issued rwts: total=2142,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:33.286       latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:33.286  job2: (groupid=0, jobs=1): err= 0: pid=76591: Mon Dec 16 06:26:49 2024
00:16:33.286    read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec)
00:16:33.286      slat (usec): min=8, max=4045, avg=110.82, stdev=495.02
00:16:33.286      clat (usec): min=8776, max=18943, avg=14420.77, stdev=1744.74
00:16:33.286       lat (usec): min=9516, max=18959, avg=14531.59, stdev=1697.89
00:16:33.286      clat percentiles (usec):
00:16:33.286       |  1.00th=[10290],  5.00th=[11207], 10.00th=[11994], 20.00th=[12649],
00:16:33.286       | 30.00th=[13173], 40.00th=[14484], 50.00th=[15008], 60.00th=[15401],
00:16:33.286       | 70.00th=[15664], 80.00th=[15795], 90.00th=[16319], 95.00th=[16581],
00:16:33.286       | 99.00th=[17433], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006],
00:16:33.286       | 99.99th=[19006]
00:16:33.286    write: IOPS=4564, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1002msec); 0 zone resets
00:16:33.286      slat (usec): min=11, max=4494, avg=111.02, stdev=406.38
00:16:33.286      clat (usec): min=1129, max=19297, avg=14741.19, stdev=2333.64
00:16:33.286       lat (usec): min=1151, max=19316, avg=14852.22, stdev=2327.25
00:16:33.286      clat percentiles (usec):
00:16:33.286       |  1.00th=[ 8160],  5.00th=[11076], 10.00th=[11600], 20.00th=[12780],
00:16:33.286       | 30.00th=[13435], 40.00th=[14615], 50.00th=[15533], 60.00th=[15795],
00:16:33.286       | 70.00th=[16319], 80.00th=[16712], 90.00th=[17171], 95.00th=[17433],
00:16:33.286       | 99.00th=[18220], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268],
00:16:33.286       | 99.99th=[19268]
00:16:33.286     bw (  KiB/s): min=16352, max=19262, per=32.38%, avg=17807.00, stdev=2057.68, samples=2
00:16:33.286     iops        : min= 4088, max= 4815, avg=4451.50, stdev=514.07, samples=2
00:16:33.286    lat (msec)   : 2=0.14%, 10=1.12%, 20=98.74%
00:16:33.286    cpu          : usr=5.09%, sys=12.99%, ctx=725, majf=0, minf=7
00:16:33.286    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:16:33.286       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:33.286       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:33.286       issued rwts: total=4096,4574,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:33.286       latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:33.286  job3: (groupid=0, jobs=1): err= 0: pid=76592: Mon Dec 16 06:26:49 2024
00:16:33.286    read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec)
00:16:33.286      slat (usec): min=7, max=6994, avg=219.57, stdev=877.64
00:16:33.286      clat (usec): min=5037, max=43431, avg=27942.05, stdev=7465.75
00:16:33.286       lat (usec): min=5053, max=43446, avg=28161.62, stdev=7469.37
00:16:33.286      clat percentiles (usec):
00:16:33.286       |  1.00th=[ 7701],  5.00th=[16450], 10.00th=[19530], 20.00th=[20841],
00:16:33.286       | 30.00th=[23725], 40.00th=[27132], 50.00th=[28443], 60.00th=[28967],
00:16:33.286       | 70.00th=[30016], 80.00th=[34866], 90.00th=[39060], 95.00th=[40633],
00:16:33.286       | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254],
00:16:33.286       | 99.99th=[43254]
00:16:33.286    write: IOPS=2553, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets
00:16:33.286      slat (usec): min=13, max=6754, avg=161.56, stdev=745.83
00:16:33.286      clat (usec): min=666, max=30165, avg=21359.75, stdev=3095.38
00:16:33.286       lat (usec): min=5031, max=30191, avg=21521.31, stdev=3041.36
00:16:33.286      clat percentiles (usec):
00:16:33.286       |  1.00th=[14746],  5.00th=[15926], 10.00th=[16581], 20.00th=[19792],
00:16:33.287       | 30.00th=[20317], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627],
00:16:33.287       | 70.00th=[22152], 80.00th=[23200], 90.00th=[25297], 95.00th=[27132],
00:16:33.287       | 99.00th=[29492], 99.50th=[30016], 99.90th=[30278], 99.95th=[30278],
00:16:33.287       | 99.99th=[30278]
00:16:33.287     bw (  KiB/s): min= 8192, max=12312, per=18.64%, avg=10252.00, stdev=2913.28, samples=2
00:16:33.287     iops        : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2
00:16:33.287    lat (usec)   : 750=0.02%
00:16:33.287    lat (msec)   : 10=0.62%, 20=18.94%, 50=80.41%
00:16:33.287    cpu          : usr=2.30%, sys=9.38%, ctx=285, majf=0, minf=13
00:16:33.287    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:16:33.287       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:33.287       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:33.287       issued rwts: total=2560,2561,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:33.287       latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:33.287  
00:16:33.287  Run status group 0 (all jobs):
00:16:33.287     READ: bw=50.0MiB/s (52.5MB/s), 8542KiB/s-16.0MiB/s (8747kB/s-16.7MB/s), io=50.2MiB (52.6MB), run=1002-1003msec
00:16:33.287    WRITE: bw=53.7MiB/s (56.3MB/s), 9.97MiB/s-17.8MiB/s (10.5MB/s-18.7MB/s), io=53.9MiB (56.5MB), run=1002-1003msec
00:16:33.287  
00:16:33.287  Disk stats (read/write):
00:16:33.287    nvme0n1: ios=3602/3584, merge=0/0, ticks=16228/14757, in_queue=30985, util=87.56%
00:16:33.287    nvme0n2: ios=1904/2048, merge=0/0, ticks=13606/12379, in_queue=25985, util=89.07%
00:16:33.287    nvme0n3: ios=3584/3943, merge=0/0, ticks=11979/13128, in_queue=25107, util=89.16%
00:16:33.287    nvme0n4: ios=2048/2505, merge=0/0, ticks=13696/11607, in_queue=25303, util=89.81%
00:16:33.287   06:26:49	-- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:16:33.287  [global]
00:16:33.287  thread=1
00:16:33.287  invalidate=1
00:16:33.287  rw=randwrite
00:16:33.287  time_based=1
00:16:33.287  runtime=1
00:16:33.287  ioengine=libaio
00:16:33.287  direct=1
00:16:33.287  bs=4096
00:16:33.287  iodepth=128
00:16:33.287  norandommap=0
00:16:33.287  numjobs=1
00:16:33.287  
00:16:33.287  verify_dump=1
00:16:33.287  verify_backlog=512
00:16:33.287  verify_state_save=0
00:16:33.287  do_verify=1
00:16:33.287  verify=crc32c-intel
00:16:33.287  [job0]
00:16:33.287  filename=/dev/nvme0n1
00:16:33.287  [job1]
00:16:33.287  filename=/dev/nvme0n2
00:16:33.287  [job2]
00:16:33.287  filename=/dev/nvme0n3
00:16:33.287  [job3]
00:16:33.287  filename=/dev/nvme0n4
00:16:33.287  Could not set queue depth (nvme0n1)
00:16:33.287  Could not set queue depth (nvme0n2)
00:16:33.287  Could not set queue depth (nvme0n3)
00:16:33.287  Could not set queue depth (nvme0n4)
00:16:33.287  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:33.287  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:33.287  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:33.287  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:33.287  fio-3.35
00:16:33.287  Starting 4 threads
00:16:34.667  
00:16:34.667  job0: (groupid=0, jobs=1): err= 0: pid=76652: Mon Dec 16 06:26:51 2024
00:16:34.667    read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec)
00:16:34.667      slat (usec): min=7, max=19450, avg=147.74, stdev=993.34
00:16:34.667      clat (usec): min=5981, max=79749, avg=17356.48, stdev=8579.92
00:16:34.667       lat (usec): min=5994, max=79766, avg=17504.22, stdev=8695.64
00:16:34.667      clat percentiles (usec):
00:16:34.667       |  1.00th=[ 9372],  5.00th=[10683], 10.00th=[11338], 20.00th=[12387],
00:16:34.667       | 30.00th=[13173], 40.00th=[13566], 50.00th=[15008], 60.00th=[16450],
00:16:34.667       | 70.00th=[17171], 80.00th=[21627], 90.00th=[25560], 95.00th=[31589],
00:16:34.667       | 99.00th=[64226], 99.50th=[78119], 99.90th=[80217], 99.95th=[80217],
00:16:34.667       | 99.99th=[80217]
00:16:34.667    write: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.9MiB/1018msec); 0 zone resets
00:16:34.667      slat (usec): min=5, max=36845, avg=197.55, stdev=1286.82
00:16:34.667      clat (msec): min=4, max=126, avg=27.71, stdev=20.87
00:16:34.667       lat (msec): min=4, max=126, avg=27.91, stdev=20.98
00:16:34.667      clat percentiles (msec):
00:16:34.667       |  1.00th=[    6],  5.00th=[   12], 10.00th=[   12], 20.00th=[   13],
00:16:34.667       | 30.00th=[   14], 40.00th=[   16], 50.00th=[   25], 60.00th=[   27],
00:16:34.667       | 70.00th=[   28], 80.00th=[   39], 90.00th=[   57], 95.00th=[   69],
00:16:34.667       | 99.00th=[  114], 99.50th=[  124], 99.90th=[  127], 99.95th=[  127],
00:16:34.667       | 99.99th=[  127]
00:16:34.667     bw (  KiB/s): min=11120, max=12288, per=22.09%, avg=11704.00, stdev=825.90, samples=2
00:16:34.667     iops        : min= 2780, max= 3072, avg=2926.00, stdev=206.48, samples=2
00:16:34.667    lat (msec)   : 10=3.01%, 20=55.59%, 50=34.06%, 100=6.34%, 250=1.00%
00:16:34.667    cpu          : usr=3.24%, sys=7.18%, ctx=328, majf=0, minf=9
00:16:34.667    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:16:34.667       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:34.667       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:34.667       issued rwts: total=2560,3054,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:34.667       latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:34.667  job1: (groupid=0, jobs=1): err= 0: pid=76653: Mon Dec 16 06:26:51 2024
00:16:34.667    read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec)
00:16:34.667      slat (usec): min=6, max=19636, avg=160.84, stdev=1068.32
00:16:34.667      clat (usec): min=5626, max=45815, avg=19402.63, stdev=7961.90
00:16:34.667       lat (usec): min=5638, max=45851, avg=19563.47, stdev=8036.00
00:16:34.667      clat percentiles (usec):
00:16:34.667       |  1.00th=[ 7898],  5.00th=[10028], 10.00th=[11600], 20.00th=[12780],
00:16:34.667       | 30.00th=[13698], 40.00th=[14091], 50.00th=[16057], 60.00th=[21627],
00:16:34.667       | 70.00th=[24773], 80.00th=[25297], 90.00th=[32900], 95.00th=[33817],
00:16:34.667       | 99.00th=[39060], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633],
00:16:34.667       | 99.99th=[45876]
00:16:34.667    write: IOPS=2498, BW=9994KiB/s (10.2MB/s)(9.83MiB/1007msec); 0 zone resets
00:16:34.667      slat (usec): min=6, max=26054, avg=260.44, stdev=1439.71
00:16:34.667      clat (usec): min=1891, max=126730, avg=34959.25, stdev=28150.86
00:16:34.667       lat (msec): min=4, max=126, avg=35.22, stdev=28.32
00:16:34.667      clat percentiles (msec):
00:16:34.667       |  1.00th=[    6],  5.00th=[   11], 10.00th=[   14], 20.00th=[   15],
00:16:34.667       | 30.00th=[   20], 40.00th=[   26], 50.00th=[   28], 60.00th=[   29],
00:16:34.667       | 70.00th=[   30], 80.00th=[   47], 90.00th=[   80], 95.00th=[  112],
00:16:34.667       | 99.00th=[  126], 99.50th=[  127], 99.90th=[  127], 99.95th=[  127],
00:16:34.667       | 99.99th=[  127]
00:16:34.667     bw (  KiB/s): min= 8833, max=10288, per=18.04%, avg=9560.50, stdev=1028.84, samples=2
00:16:34.667     iops        : min= 2208, max= 2572, avg=2390.00, stdev=257.39, samples=2
00:16:34.667    lat (msec)   : 2=0.02%, 10=4.12%, 20=38.72%, 50=46.67%, 100=6.66%
00:16:34.667    lat (msec)   : 250=3.81%
00:16:34.667    cpu          : usr=2.49%, sys=6.16%, ctx=302, majf=0, minf=7
00:16:34.667    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:16:34.667       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:34.667       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:34.667       issued rwts: total=2048,2516,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:34.667       latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:34.667  job2: (groupid=0, jobs=1): err= 0: pid=76654: Mon Dec 16 06:26:51 2024
00:16:34.667    read: IOPS=4135, BW=16.2MiB/s (16.9MB/s)(16.3MiB/1009msec)
00:16:34.667      slat (usec): min=6, max=14167, avg=114.89, stdev=823.62
00:16:34.667      clat (usec): min=2171, max=29394, avg=15130.82, stdev=3438.86
00:16:34.667       lat (usec): min=5367, max=38271, avg=15245.71, stdev=3497.48
00:16:34.667      clat percentiles (usec):
00:16:34.667       |  1.00th=[ 9110],  5.00th=[11338], 10.00th=[12125], 20.00th=[13042],
00:16:34.667       | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[14615],
00:16:34.667       | 70.00th=[15270], 80.00th=[17695], 90.00th=[19530], 95.00th=[22152],
00:16:34.667       | 99.00th=[27395], 99.50th=[27919], 99.90th=[29492], 99.95th=[29492],
00:16:34.667       | 99.99th=[29492]
00:16:34.667    write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets
00:16:34.667      slat (usec): min=5, max=12563, avg=106.15, stdev=770.93
00:16:34.667      clat (usec): min=3594, max=29365, avg=14036.25, stdev=2689.62
00:16:34.667       lat (usec): min=3620, max=29389, avg=14142.40, stdev=2799.96
00:16:34.667      clat percentiles (usec):
00:16:34.667       |  1.00th=[ 5342],  5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[12911],
00:16:34.667       | 30.00th=[13698], 40.00th=[14091], 50.00th=[14615], 60.00th=[15008],
00:16:34.667       | 70.00th=[15533], 80.00th=[15926], 90.00th=[16319], 95.00th=[16450],
00:16:34.667       | 99.00th=[16909], 99.50th=[23462], 99.90th=[27919], 99.95th=[28181],
00:16:34.667       | 99.99th=[29492]
00:16:34.667     bw (  KiB/s): min=18032, max=18424, per=34.40%, avg=18228.00, stdev=277.19, samples=2
00:16:34.667     iops        : min= 4508, max= 4606, avg=4557.00, stdev=69.30, samples=2
00:16:34.667    lat (msec)   : 4=0.13%, 10=6.32%, 20=89.68%, 50=3.87%
00:16:34.667    cpu          : usr=4.17%, sys=12.50%, ctx=398, majf=0, minf=8
00:16:34.668    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:16:34.668       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:34.668       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:34.668       issued rwts: total=4173,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:34.668       latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:34.668  job3: (groupid=0, jobs=1): err= 0: pid=76655: Mon Dec 16 06:26:51 2024
00:16:34.668    read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec)
00:16:34.668      slat (usec): min=7, max=18955, avg=159.86, stdev=1098.80
00:16:34.668      clat (usec): min=6949, max=39475, avg=20693.73, stdev=4885.37
00:16:34.668       lat (usec): min=6997, max=39503, avg=20853.59, stdev=4947.18
00:16:34.668      clat percentiles (usec):
00:16:34.668       |  1.00th=[ 9896],  5.00th=[15401], 10.00th=[15795], 20.00th=[16909],
00:16:34.668       | 30.00th=[18482], 40.00th=[19268], 50.00th=[19530], 60.00th=[20317],
00:16:34.668       | 70.00th=[21365], 80.00th=[23725], 90.00th=[26870], 95.00th=[31065],
00:16:34.668       | 99.00th=[37487], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584],
00:16:34.668       | 99.99th=[39584]
00:16:34.668    write: IOPS=3290, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1005msec); 0 zone resets
00:16:34.668      slat (usec): min=6, max=16885, avg=145.22, stdev=1012.91
00:16:34.668      clat (usec): min=2719, max=39406, avg=19273.28, stdev=3766.80
00:16:34.668       lat (usec): min=5964, max=39417, avg=19418.50, stdev=3883.19
00:16:34.668      clat percentiles (usec):
00:16:34.668       |  1.00th=[ 7111],  5.00th=[10290], 10.00th=[14091], 20.00th=[17695],
00:16:34.668       | 30.00th=[18744], 40.00th=[19792], 50.00th=[20055], 60.00th=[20579],
00:16:34.668       | 70.00th=[21627], 80.00th=[22152], 90.00th=[22414], 95.00th=[22676],
00:16:34.668       | 99.00th=[23462], 99.50th=[31065], 99.90th=[39060], 99.95th=[39584],
00:16:34.668       | 99.99th=[39584]
00:16:34.668     bw (  KiB/s): min=12392, max=13066, per=24.02%, avg=12729.00, stdev=476.59, samples=2
00:16:34.668     iops        : min= 3098, max= 3266, avg=3182.00, stdev=118.79, samples=2
00:16:34.668    lat (msec)   : 4=0.02%, 10=2.68%, 20=48.64%, 50=48.66%
00:16:34.668    cpu          : usr=3.29%, sys=9.76%, ctx=319, majf=0, minf=11
00:16:34.668    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:16:34.668       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:34.668       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:34.668       issued rwts: total=3072,3307,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:34.668       latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:34.668  
00:16:34.668  Run status group 0 (all jobs):
00:16:34.668     READ: bw=45.5MiB/s (47.7MB/s), 8135KiB/s-16.2MiB/s (8330kB/s-16.9MB/s), io=46.3MiB (48.5MB), run=1005-1018msec
00:16:34.668    WRITE: bw=51.7MiB/s (54.3MB/s), 9994KiB/s-17.8MiB/s (10.2MB/s-18.7MB/s), io=52.7MiB (55.2MB), run=1005-1018msec
00:16:34.668  
00:16:34.668  Disk stats (read/write):
00:16:34.668    nvme0n1: ios=2610/2623, merge=0/0, ticks=42038/60270, in_queue=102308, util=87.27%
00:16:34.668    nvme0n2: ios=1581/1839, merge=0/0, ticks=28768/74057, in_queue=102825, util=88.15%
00:16:34.668    nvme0n3: ios=3584/3879, merge=0/0, ticks=49891/50734, in_queue=100625, util=89.17%
00:16:34.668    nvme0n4: ios=2560/2823, merge=0/0, ticks=50324/51658, in_queue=101982, util=89.53%
00:16:34.668   06:26:51	-- target/fio.sh@55 -- # sync
00:16:34.668   06:26:51	-- target/fio.sh@59 -- # fio_pid=76672
00:16:34.668   06:26:51	-- target/fio.sh@61 -- # sleep 3
00:16:34.668   06:26:51	-- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:16:34.668  [global]
00:16:34.668  thread=1
00:16:34.668  invalidate=1
00:16:34.668  rw=read
00:16:34.668  time_based=1
00:16:34.668  runtime=10
00:16:34.668  ioengine=libaio
00:16:34.668  direct=1
00:16:34.668  bs=4096
00:16:34.668  iodepth=1
00:16:34.668  norandommap=1
00:16:34.668  numjobs=1
00:16:34.668  
00:16:34.668  [job0]
00:16:34.668  filename=/dev/nvme0n1
00:16:34.668  [job1]
00:16:34.668  filename=/dev/nvme0n2
00:16:34.668  [job2]
00:16:34.668  filename=/dev/nvme0n3
00:16:34.668  [job3]
00:16:34.668  filename=/dev/nvme0n4
00:16:34.668  Could not set queue depth (nvme0n1)
00:16:34.668  Could not set queue depth (nvme0n2)
00:16:34.668  Could not set queue depth (nvme0n3)
00:16:34.668  Could not set queue depth (nvme0n4)
00:16:34.668  job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:34.668  job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:34.668  job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:34.668  job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:34.668  fio-3.35
00:16:34.668  Starting 4 threads
00:16:37.958   06:26:54	-- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
00:16:37.958  fio: pid=76716, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:16:37.958  fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39698432, buflen=4096
00:16:37.958   06:26:54	-- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
00:16:38.217  fio: pid=76715, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:16:38.217  fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43433984, buflen=4096
00:16:38.217   06:26:54	-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:16:38.217   06:26:54	-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:16:38.476  fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=38465536, buflen=4096
00:16:38.476  fio: pid=76713, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:16:38.476   06:26:55	-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:16:38.476   06:26:55	-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:16:38.476  fio: pid=76714, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:16:38.476  fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=41955328, buflen=4096
00:16:38.735  
00:16:38.735  job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76713: Mon Dec 16 06:26:55 2024
00:16:38.735    read: IOPS=2680, BW=10.5MiB/s (11.0MB/s)(36.7MiB/3504msec)
00:16:38.735      slat (usec): min=9, max=11432, avg=21.05, stdev=176.14
00:16:38.735      clat (usec): min=109, max=8026, avg=350.25, stdev=137.18
00:16:38.735       lat (usec): min=145, max=11745, avg=371.29, stdev=223.92
00:16:38.735      clat percentiles (usec):
00:16:38.735       |  1.00th=[  153],  5.00th=[  178], 10.00th=[  200], 20.00th=[  237],
00:16:38.735       | 30.00th=[  343], 40.00th=[  363], 50.00th=[  375], 60.00th=[  388],
00:16:38.735       | 70.00th=[  400], 80.00th=[  416], 90.00th=[  441], 95.00th=[  457],
00:16:38.735       | 99.00th=[  502], 99.50th=[  553], 99.90th=[  750], 99.95th=[ 2474],
00:16:38.735       | 99.99th=[ 8029]
00:16:38.735     bw (  KiB/s): min= 9320, max=11784, per=23.37%, avg=9953.33, stdev=907.36, samples=6
00:16:38.736     iops        : min= 2330, max= 2946, avg=2488.33, stdev=226.84, samples=6
00:16:38.736    lat (usec)   : 250=22.51%, 500=76.41%, 750=0.98%, 1000=0.01%
00:16:38.736    lat (msec)   : 2=0.03%, 4=0.03%, 10=0.02%
00:16:38.736    cpu          : usr=0.69%, sys=4.00%, ctx=9414, majf=0, minf=1
00:16:38.736    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:38.736       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.736       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.736       issued rwts: total=9392,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:38.736       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:38.736  job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76714: Mon Dec 16 06:26:55 2024
00:16:38.736    read: IOPS=2731, BW=10.7MiB/s (11.2MB/s)(40.0MiB/3750msec)
00:16:38.736      slat (usec): min=6, max=15423, avg=30.55, stdev=231.02
00:16:38.736      clat (usec): min=106, max=4290, avg=333.23, stdev=131.12
00:16:38.736       lat (usec): min=141, max=15660, avg=363.78, stdev=264.71
00:16:38.736      clat percentiles (usec):
00:16:38.736       |  1.00th=[  139],  5.00th=[  147], 10.00th=[  161], 20.00th=[  215],
00:16:38.736       | 30.00th=[  318], 40.00th=[  347], 50.00th=[  359], 60.00th=[  375],
00:16:38.736       | 70.00th=[  388], 80.00th=[  408], 90.00th=[  429], 95.00th=[  449],
00:16:38.736       | 99.00th=[  486], 99.50th=[  515], 99.90th=[ 1401], 99.95th=[ 2769],
00:16:38.736       | 99.99th=[ 4146]
00:16:38.736     bw (  KiB/s): min= 9560, max=14662, per=24.34%, avg=10366.57, stdev=1894.73, samples=7
00:16:38.736     iops        : min= 2390, max= 3665, avg=2591.57, stdev=473.49, samples=7
00:16:38.736    lat (usec)   : 250=23.55%, 500=75.81%, 750=0.49%, 1000=0.02%
00:16:38.736    lat (msec)   : 2=0.05%, 4=0.06%, 10=0.02%
00:16:38.736    cpu          : usr=1.23%, sys=5.84%, ctx=10289, majf=0, minf=2
00:16:38.736    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:38.736       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.736       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.736       issued rwts: total=10244,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:38.736       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:38.736  job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76715: Mon Dec 16 06:26:55 2024
00:16:38.736    read: IOPS=3243, BW=12.7MiB/s (13.3MB/s)(41.4MiB/3270msec)
00:16:38.736      slat (usec): min=7, max=11470, avg=18.16, stdev=144.59
00:16:38.736      clat (usec): min=84, max=4184, avg=288.60, stdev=101.96
00:16:38.736       lat (usec): min=151, max=11758, avg=306.77, stdev=177.34
00:16:38.736      clat percentiles (usec):
00:16:38.736       |  1.00th=[  167],  5.00th=[  180], 10.00th=[  188], 20.00th=[  206],
00:16:38.736       | 30.00th=[  231], 40.00th=[  285], 50.00th=[  306], 60.00th=[  318],
00:16:38.736       | 70.00th=[  330], 80.00th=[  347], 90.00th=[  367], 95.00th=[  388],
00:16:38.736       | 99.00th=[  433], 99.50th=[  465], 99.90th=[  881], 99.95th=[ 1975],
00:16:38.736       | 99.99th=[ 3392]
00:16:38.736     bw (  KiB/s): min=11032, max=16896, per=31.00%, avg=13204.00, stdev=2849.67, samples=6
00:16:38.736     iops        : min= 2758, max= 4224, avg=3301.00, stdev=712.42, samples=6
00:16:38.736    lat (usec)   : 100=0.01%, 250=35.44%, 500=64.15%, 750=0.25%, 1000=0.06%
00:16:38.736    lat (msec)   : 2=0.05%, 4=0.04%, 10=0.01%
00:16:38.736    cpu          : usr=0.95%, sys=4.31%, ctx=10640, majf=0, minf=2
00:16:38.736    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:38.736       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.736       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.736       issued rwts: total=10605,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:38.736       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:38.736  job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76716: Mon Dec 16 06:26:55 2024
00:16:38.736    read: IOPS=3272, BW=12.8MiB/s (13.4MB/s)(37.9MiB/2962msec)
00:16:38.736      slat (nsec): min=7263, max=68350, avg=14667.02, stdev=5294.32
00:16:38.736      clat (usec): min=156, max=7777, avg=289.35, stdev=115.76
00:16:38.736       lat (usec): min=169, max=7790, avg=304.02, stdev=115.21
00:16:38.736      clat percentiles (usec):
00:16:38.736       |  1.00th=[  169],  5.00th=[  180], 10.00th=[  188], 20.00th=[  208],
00:16:38.736       | 30.00th=[  233], 40.00th=[  281], 50.00th=[  310], 60.00th=[  322],
00:16:38.736       | 70.00th=[  334], 80.00th=[  347], 90.00th=[  367], 95.00th=[  388],
00:16:38.736       | 99.00th=[  433], 99.50th=[  453], 99.90th=[  865], 99.95th=[ 1827],
00:16:38.736       | 99.99th=[ 7767]
00:16:38.736     bw (  KiB/s): min=11448, max=16760, per=31.75%, avg=13524.80, stdev=2802.55, samples=5
00:16:38.736     iops        : min= 2862, max= 4190, avg=3381.20, stdev=700.64, samples=5
00:16:38.736    lat (usec)   : 250=35.46%, 500=64.24%, 750=0.19%, 1000=0.01%
00:16:38.736    lat (msec)   : 2=0.05%, 4=0.03%, 10=0.01%
00:16:38.736    cpu          : usr=0.74%, sys=4.15%, ctx=9701, majf=0, minf=1
00:16:38.736    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:38.736       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.736       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:38.736       issued rwts: total=9693,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:38.736       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:38.736  
00:16:38.736  Run status group 0 (all jobs):
00:16:38.736     READ: bw=41.6MiB/s (43.6MB/s), 10.5MiB/s-12.8MiB/s (11.0MB/s-13.4MB/s), io=156MiB (164MB), run=2962-3750msec
00:16:38.736  
00:16:38.736  Disk stats (read/write):
00:16:38.736    nvme0n1: ios=8753/0, merge=0/0, ticks=3203/0, in_queue=3203, util=95.39%
00:16:38.736    nvme0n2: ios=9485/0, merge=0/0, ticks=3295/0, in_queue=3295, util=95.34%
00:16:38.736    nvme0n3: ios=10174/0, merge=0/0, ticks=2924/0, in_queue=2924, util=96.12%
00:16:38.736    nvme0n4: ios=9470/0, merge=0/0, ticks=2713/0, in_queue=2713, util=96.59%
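(Editor's note: the group bandwidth on the "Run status" line is total bytes read divided by the longest job runtime; a quick check with the values copied from that line reproduces the reported figure.)

  # 156 MiB read in total, longest job ran 3750 ms:
  awk 'BEGIN { printf "%.1f MiB/s\n", 156 / 3.750 }'   # prints 41.6 MiB/s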
00:16:38.736   06:26:55	-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:16:38.736   06:26:55	-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:16:38.995   06:26:55	-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:16:38.995   06:26:55	-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:16:39.254   06:26:55	-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:16:39.254   06:26:55	-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:16:39.513   06:26:56	-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:16:39.513   06:26:56	-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:16:39.513   06:26:56	-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:16:39.513   06:26:56	-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:16:40.082   06:26:56	-- target/fio.sh@69 -- # fio_status=0
00:16:40.082   06:26:56	-- target/fio.sh@70 -- # wait 76672
00:16:40.082   06:26:56	-- target/fio.sh@70 -- # fio_status=4
00:16:40.082   06:26:56	-- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:40.082  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:40.082   06:26:56	-- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:40.082   06:26:56	-- common/autotest_common.sh@1208 -- # local i=0
00:16:40.082   06:26:56	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:16:40.082   06:26:56	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:40.082   06:26:56	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:16:40.082   06:26:56	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:40.082  nvmf hotplug test: fio failed as expected
00:16:40.082   06:26:56	-- common/autotest_common.sh@1220 -- # return 0
00:16:40.082   06:26:56	-- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:16:40.082   06:26:56	-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
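(Editor's sketch of the disconnect-and-wait step traced above; the waitforserial_disconnect internals beyond the lsblk check are assumed, only the observable behavior is shown.)

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # Poll until no block device with the target's serial is visible anymore.
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 1
  done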
00:16:40.082   06:26:56	-- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:40.341   06:26:57	-- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:16:40.341   06:26:57	-- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:16:40.341   06:26:57	-- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:16:40.341   06:26:57	-- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:16:40.341   06:26:57	-- target/fio.sh@91 -- # nvmftestfini
00:16:40.341   06:26:57	-- nvmf/common.sh@476 -- # nvmfcleanup
00:16:40.341   06:26:57	-- nvmf/common.sh@116 -- # sync
00:16:40.341   06:26:57	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:16:40.341   06:26:57	-- nvmf/common.sh@119 -- # set +e
00:16:40.341   06:26:57	-- nvmf/common.sh@120 -- # for i in {1..20}
00:16:40.341   06:26:57	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:16:40.341  rmmod nvme_tcp
00:16:40.341  rmmod nvme_fabrics
00:16:40.341  rmmod nvme_keyring
00:16:40.341   06:26:57	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:40.341   06:26:57	-- nvmf/common.sh@123 -- # set -e
00:16:40.341   06:26:57	-- nvmf/common.sh@124 -- # return 0
00:16:40.341   06:26:57	-- nvmf/common.sh@477 -- # '[' -n 76173 ']'
00:16:40.341   06:26:57	-- nvmf/common.sh@478 -- # killprocess 76173
00:16:40.341   06:26:57	-- common/autotest_common.sh@936 -- # '[' -z 76173 ']'
00:16:40.341   06:26:57	-- common/autotest_common.sh@940 -- # kill -0 76173
00:16:40.341    06:26:57	-- common/autotest_common.sh@941 -- # uname
00:16:40.341   06:26:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:40.341    06:26:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76173
00:16:40.341  killing process with pid 76173
00:16:40.341   06:26:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:40.341   06:26:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:40.341   06:26:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 76173'
00:16:40.341   06:26:57	-- common/autotest_common.sh@955 -- # kill 76173
00:16:40.341   06:26:57	-- common/autotest_common.sh@960 -- # wait 76173
00:16:40.600   06:26:57	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:16:40.600   06:26:57	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:16:40.600   06:26:57	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:16:40.600   06:26:57	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:40.600   06:26:57	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:16:40.600   06:26:57	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:40.600   06:26:57	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:40.600    06:26:57	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:40.600   06:26:57	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:16:40.600  
00:16:40.600  real	0m19.575s
00:16:40.600  user	1m15.273s
00:16:40.600  sys	0m7.937s
00:16:40.600   06:26:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:40.600  ************************************
00:16:40.600  END TEST nvmf_fio_target
00:16:40.600  ************************************
00:16:40.600   06:26:57	-- common/autotest_common.sh@10 -- # set +x
00:16:40.859   06:26:57	-- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:16:40.859   06:26:57	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:40.859   06:26:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:40.859   06:26:57	-- common/autotest_common.sh@10 -- # set +x
00:16:40.859  ************************************
00:16:40.859  START TEST nvmf_bdevio
00:16:40.859  ************************************
00:16:40.859   06:26:57	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:16:40.859  * Looking for test storage...
00:16:40.859  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:40.859    06:26:57	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:40.859     06:26:57	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:40.859     06:26:57	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:40.859    06:26:57	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:40.859    06:26:57	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:40.859    06:26:57	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:40.859    06:26:57	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:40.859    06:26:57	-- scripts/common.sh@335 -- # IFS=.-:
00:16:40.859    06:26:57	-- scripts/common.sh@335 -- # read -ra ver1
00:16:40.859    06:26:57	-- scripts/common.sh@336 -- # IFS=.-:
00:16:40.860    06:26:57	-- scripts/common.sh@336 -- # read -ra ver2
00:16:40.860    06:26:57	-- scripts/common.sh@337 -- # local 'op=<'
00:16:40.860    06:26:57	-- scripts/common.sh@339 -- # ver1_l=2
00:16:40.860    06:26:57	-- scripts/common.sh@340 -- # ver2_l=1
00:16:40.860    06:26:57	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:40.860    06:26:57	-- scripts/common.sh@343 -- # case "$op" in
00:16:40.860    06:26:57	-- scripts/common.sh@344 -- # : 1
00:16:40.860    06:26:57	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:40.860    06:26:57	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:40.860     06:26:57	-- scripts/common.sh@364 -- # decimal 1
00:16:40.860     06:26:57	-- scripts/common.sh@352 -- # local d=1
00:16:40.860     06:26:57	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:40.860     06:26:57	-- scripts/common.sh@354 -- # echo 1
00:16:40.860    06:26:57	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:40.860     06:26:57	-- scripts/common.sh@365 -- # decimal 2
00:16:40.860     06:26:57	-- scripts/common.sh@352 -- # local d=2
00:16:40.860     06:26:57	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:40.860     06:26:57	-- scripts/common.sh@354 -- # echo 2
00:16:40.860    06:26:57	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:40.860    06:26:57	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:40.860    06:26:57	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:40.860    06:26:57	-- scripts/common.sh@367 -- # return 0
00:16:40.860    06:26:57	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:40.860    06:26:57	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:40.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.860  		--rc genhtml_branch_coverage=1
00:16:40.860  		--rc genhtml_function_coverage=1
00:16:40.860  		--rc genhtml_legend=1
00:16:40.860  		--rc geninfo_all_blocks=1
00:16:40.860  		--rc geninfo_unexecuted_blocks=1
00:16:40.860  		
00:16:40.860  		'
00:16:40.860    06:26:57	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:40.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.860  		--rc genhtml_branch_coverage=1
00:16:40.860  		--rc genhtml_function_coverage=1
00:16:40.860  		--rc genhtml_legend=1
00:16:40.860  		--rc geninfo_all_blocks=1
00:16:40.860  		--rc geninfo_unexecuted_blocks=1
00:16:40.860  		
00:16:40.860  		'
00:16:40.860    06:26:57	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:40.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.860  		--rc genhtml_branch_coverage=1
00:16:40.860  		--rc genhtml_function_coverage=1
00:16:40.860  		--rc genhtml_legend=1
00:16:40.860  		--rc geninfo_all_blocks=1
00:16:40.860  		--rc geninfo_unexecuted_blocks=1
00:16:40.860  		
00:16:40.860  		'
00:16:40.860    06:26:57	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:40.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.860  		--rc genhtml_branch_coverage=1
00:16:40.860  		--rc genhtml_function_coverage=1
00:16:40.860  		--rc genhtml_legend=1
00:16:40.860  		--rc geninfo_all_blocks=1
00:16:40.860  		--rc geninfo_unexecuted_blocks=1
00:16:40.860  		
00:16:40.860  		'
00:16:40.860   06:26:57	-- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:40.860     06:26:57	-- nvmf/common.sh@7 -- # uname -s
00:16:40.860    06:26:57	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:40.860    06:26:57	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:40.860    06:26:57	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:40.860    06:26:57	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:40.860    06:26:57	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:40.860    06:26:57	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:40.860    06:26:57	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:40.860    06:26:57	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:40.860    06:26:57	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:40.860     06:26:57	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:40.860    06:26:57	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:16:40.860    06:26:57	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:16:40.860    06:26:57	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:40.860    06:26:57	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:40.860    06:26:57	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:40.860    06:26:57	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:40.860     06:26:57	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:40.860     06:26:57	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:40.860     06:26:57	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:40.860      06:26:57	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.860      06:26:57	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.860      06:26:57	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.860      06:26:57	-- paths/export.sh@5 -- # export PATH
00:16:40.860      06:26:57	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.860    06:26:57	-- nvmf/common.sh@46 -- # : 0
00:16:40.860    06:26:57	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:40.860    06:26:57	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:40.860    06:26:57	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:40.860    06:26:57	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:40.860    06:26:57	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:40.860    06:26:57	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:40.860    06:26:57	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:40.860    06:26:57	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:40.860   06:26:57	-- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:40.860   06:26:57	-- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:40.860   06:26:57	-- target/bdevio.sh@14 -- # nvmftestinit
00:16:40.860   06:26:57	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:16:40.860   06:26:57	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:40.860   06:26:57	-- nvmf/common.sh@436 -- # prepare_net_devs
00:16:40.860   06:26:57	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:16:40.860   06:26:57	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:16:40.860   06:26:57	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:40.860   06:26:57	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:40.860    06:26:57	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:40.860   06:26:57	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:16:40.860   06:26:57	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:16:40.860   06:26:57	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:16:40.860   06:26:57	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:16:40.860   06:26:57	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:16:40.860   06:26:57	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:16:40.860   06:26:57	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:40.860   06:26:57	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:40.860   06:26:57	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:16:40.860   06:26:57	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:16:40.860   06:26:57	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:40.860   06:26:57	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:40.860   06:26:57	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:40.860   06:26:57	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:40.860   06:26:57	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:40.860   06:26:57	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:40.860   06:26:57	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:40.860   06:26:57	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:40.860   06:26:57	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:16:40.861   06:26:57	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:16:40.861  Cannot find device "nvmf_tgt_br"
00:16:40.861   06:26:57	-- nvmf/common.sh@154 -- # true
00:16:40.861   06:26:57	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:16:40.861  Cannot find device "nvmf_tgt_br2"
00:16:40.861   06:26:57	-- nvmf/common.sh@155 -- # true
00:16:40.861   06:26:57	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:16:41.120   06:26:57	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:16:41.120  Cannot find device "nvmf_tgt_br"
00:16:41.120   06:26:57	-- nvmf/common.sh@157 -- # true
00:16:41.120   06:26:57	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:16:41.120  Cannot find device "nvmf_tgt_br2"
00:16:41.120   06:26:57	-- nvmf/common.sh@158 -- # true
00:16:41.120   06:26:57	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:16:41.120   06:26:57	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:16:41.120   06:26:57	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:41.120  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:41.120   06:26:57	-- nvmf/common.sh@161 -- # true
00:16:41.120   06:26:57	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:41.120  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:41.120   06:26:57	-- nvmf/common.sh@162 -- # true
00:16:41.120   06:26:57	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:16:41.120   06:26:57	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:41.120   06:26:57	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:41.120   06:26:57	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:41.120   06:26:57	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:41.120   06:26:57	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:41.120   06:26:58	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:41.120   06:26:58	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:16:41.120   06:26:58	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:16:41.120   06:26:58	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:16:41.120   06:26:58	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:16:41.120   06:26:58	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:16:41.120   06:26:58	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:16:41.120   06:26:58	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:41.120   06:26:58	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:41.120   06:26:58	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:41.120   06:26:58	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:16:41.120   06:26:58	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:16:41.120   06:26:58	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:16:41.120   06:26:58	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:41.120   06:26:58	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:41.380   06:26:58	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:41.380   06:26:58	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:41.380   06:26:58	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:16:41.380  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:41.380  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms
00:16:41.380  
00:16:41.380  --- 10.0.0.2 ping statistics ---
00:16:41.380  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:41.380  rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:16:41.380   06:26:58	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:16:41.380  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:41.380  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms
00:16:41.380  
00:16:41.380  --- 10.0.0.3 ping statistics ---
00:16:41.380  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:41.380  rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:16:41.380   06:26:58	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:41.380  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:41.380  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms
00:16:41.380  
00:16:41.380  --- 10.0.0.1 ping statistics ---
00:16:41.380  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:41.380  rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
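(Editor's sketch condensing the nvmf_veth_init commands traced above: the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, the initiator stays in the default namespace on 10.0.0.1, and the two are joined through the nvmf_br bridge. Interface and namespace names are taken from the log; this is an outline, not the exact common.sh implementation.)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  master nvmf_br && ip link set nvmf_tgt_br  up
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # verify reachability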
00:16:41.380   06:26:58	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:41.380   06:26:58	-- nvmf/common.sh@421 -- # return 0
00:16:41.380   06:26:58	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:16:41.380   06:26:58	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:41.380   06:26:58	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:16:41.380   06:26:58	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:16:41.380   06:26:58	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:41.380   06:26:58	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:16:41.380   06:26:58	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:16:41.380   06:26:58	-- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:16:41.380   06:26:58	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:16:41.380   06:26:58	-- common/autotest_common.sh@722 -- # xtrace_disable
00:16:41.380   06:26:58	-- common/autotest_common.sh@10 -- # set +x
00:16:41.380   06:26:58	-- nvmf/common.sh@469 -- # nvmfpid=77041
00:16:41.380   06:26:58	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:16:41.380   06:26:58	-- nvmf/common.sh@470 -- # waitforlisten 77041
00:16:41.380   06:26:58	-- common/autotest_common.sh@829 -- # '[' -z 77041 ']'
00:16:41.380   06:26:58	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:41.380   06:26:58	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:41.380  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:41.380   06:26:58	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:41.380   06:26:58	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:41.380   06:26:58	-- common/autotest_common.sh@10 -- # set +x
00:16:41.380  [2024-12-16 06:26:58.241822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:41.380  [2024-12-16 06:26:58.241958] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:41.641  [2024-12-16 06:26:58.389733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:41.641  [2024-12-16 06:26:58.474798] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:41.641  [2024-12-16 06:26:58.474958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:41.641  [2024-12-16 06:26:58.474970] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:41.641  [2024-12-16 06:26:58.474978] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:41.641  [2024-12-16 06:26:58.475807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:16:41.641  [2024-12-16 06:26:58.475958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:16:41.641  [2024-12-16 06:26:58.476035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:16:41.641  [2024-12-16 06:26:58.476043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:42.576   06:26:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:42.576   06:26:59	-- common/autotest_common.sh@862 -- # return 0
00:16:42.576   06:26:59	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:16:42.576   06:26:59	-- common/autotest_common.sh@728 -- # xtrace_disable
00:16:42.576   06:26:59	-- common/autotest_common.sh@10 -- # set +x
00:16:42.576   06:26:59	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:42.576   06:26:59	-- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:42.576   06:26:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.576   06:26:59	-- common/autotest_common.sh@10 -- # set +x
00:16:42.576  [2024-12-16 06:26:59.317462] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:42.576   06:26:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.576   06:26:59	-- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:42.576   06:26:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.576   06:26:59	-- common/autotest_common.sh@10 -- # set +x
00:16:42.576  Malloc0
00:16:42.576   06:26:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.576   06:26:59	-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:42.576   06:26:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.576   06:26:59	-- common/autotest_common.sh@10 -- # set +x
00:16:42.576   06:26:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.576   06:26:59	-- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:42.576   06:26:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.576   06:26:59	-- common/autotest_common.sh@10 -- # set +x
00:16:42.576   06:26:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.576   06:26:59	-- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:42.576   06:26:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.576   06:26:59	-- common/autotest_common.sh@10 -- # set +x
00:16:42.576  [2024-12-16 06:26:59.401885] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
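(Editor's sketch: the rpc_cmd calls traced above are the entire target-side setup for this test. Issued directly through scripts/rpc.py against the running nvmf_tgt, the same sequence looks like the following; arguments are copied from the trace.)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport
  $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420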
00:16:42.576   06:26:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.576   06:26:59	-- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:16:42.576    06:26:59	-- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:16:42.576    06:26:59	-- nvmf/common.sh@520 -- # config=()
00:16:42.576    06:26:59	-- nvmf/common.sh@520 -- # local subsystem config
00:16:42.576    06:26:59	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:16:42.576    06:26:59	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:16:42.576  {
00:16:42.576    "params": {
00:16:42.576      "name": "Nvme$subsystem",
00:16:42.576      "trtype": "$TEST_TRANSPORT",
00:16:42.576      "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:42.576      "adrfam": "ipv4",
00:16:42.576      "trsvcid": "$NVMF_PORT",
00:16:42.576      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:42.576      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:42.576      "hdgst": ${hdgst:-false},
00:16:42.576      "ddgst": ${ddgst:-false}
00:16:42.576    },
00:16:42.576    "method": "bdev_nvme_attach_controller"
00:16:42.576  }
00:16:42.576  EOF
00:16:42.576  )")
00:16:42.576     06:26:59	-- nvmf/common.sh@542 -- # cat
00:16:42.576    06:26:59	-- nvmf/common.sh@544 -- # jq .
00:16:42.576     06:26:59	-- nvmf/common.sh@545 -- # IFS=,
00:16:42.576     06:26:59	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:16:42.576    "params": {
00:16:42.576      "name": "Nvme1",
00:16:42.576      "trtype": "tcp",
00:16:42.576      "traddr": "10.0.0.2",
00:16:42.576      "adrfam": "ipv4",
00:16:42.576      "trsvcid": "4420",
00:16:42.576      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:42.576      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:42.576      "hdgst": false,
00:16:42.576      "ddgst": false
00:16:42.576    },
00:16:42.576    "method": "bdev_nvme_attach_controller"
00:16:42.576  }'
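(Editor's sketch: the JSON printed above is what gen_nvmf_target_json feeds to the bdevio app over fd 62. Outside this test, the same controller could be attached to a running SPDK application with a single RPC call using the parameters shown; this invocation is illustrative, not part of the test run.)

  # Attach the NVMe-oF controller so its namespace shows up as bdev Nvme1n1.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1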
00:16:42.576  [2024-12-16 06:26:59.453130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:42.576  [2024-12-16 06:26:59.453185] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77105 ]
00:16:42.835  [2024-12-16 06:26:59.585377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:16:42.835  [2024-12-16 06:26:59.677379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:42.835  [2024-12-16 06:26:59.677525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:42.835  [2024-12-16 06:26:59.677527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:43.092  [2024-12-16 06:26:59.878221] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:16:43.092  [2024-12-16 06:26:59.878270] rpc.c:  90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:16:43.092  I/O targets:
00:16:43.092    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:16:43.092  
00:16:43.092  
00:16:43.092       CUnit - A unit testing framework for C - Version 2.1-3
00:16:43.092       http://cunit.sourceforge.net/
00:16:43.092  
00:16:43.092  
00:16:43.092  Suite: bdevio tests on: Nvme1n1
00:16:43.092    Test: blockdev write read block ...passed
00:16:43.092    Test: blockdev write zeroes read block ...passed
00:16:43.092    Test: blockdev write zeroes read no split ...passed
00:16:43.092    Test: blockdev write zeroes read split ...passed
00:16:43.092    Test: blockdev write zeroes read split partial ...passed
00:16:43.092    Test: blockdev reset ...[2024-12-16 06:26:59.998382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:43.092  [2024-12-16 06:26:59.998511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaff910 (9): Bad file descriptor
00:16:43.092  [2024-12-16 06:27:00.018169] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:43.092  passed
00:16:43.092    Test: blockdev write read 8 blocks ...passed
00:16:43.092    Test: blockdev write read size > 128k ...passed
00:16:43.092    Test: blockdev write read invalid size ...passed
00:16:43.092    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:16:43.092    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:16:43.092    Test: blockdev write read max offset ...passed
00:16:43.350    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:16:43.350    Test: blockdev writev readv 8 blocks ...passed
00:16:43.350    Test: blockdev writev readv 30 x 1block ...passed
00:16:43.350    Test: blockdev writev readv block ...passed
00:16:43.350    Test: blockdev writev readv size > 128k ...passed
00:16:43.350    Test: blockdev writev readv size > 128k in two iovs ...passed
00:16:43.350    Test: blockdev comparev and writev ...[2024-12-16 06:27:00.189385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:43.350  [2024-12-16 06:27:00.189459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.189478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:43.350  [2024-12-16 06:27:00.189508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.190098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:43.350  [2024-12-16 06:27:00.190142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.190158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:43.350  [2024-12-16 06:27:00.190167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.190789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:43.350  [2024-12-16 06:27:00.190833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.190873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:43.350  [2024-12-16 06:27:00.190883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.191476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:43.350  [2024-12-16 06:27:00.191526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.191542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:43.350  [2024-12-16 06:27:00.191552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:16:43.350  passed
00:16:43.350    Test: blockdev nvme passthru rw ...passed
00:16:43.350    Test: blockdev nvme passthru vendor specific ...[2024-12-16 06:27:00.273851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:43.350  [2024-12-16 06:27:00.273894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.274154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:43.350  [2024-12-16 06:27:00.274180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.274423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:43.350  [2024-12-16 06:27:00.274449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:16:43.350  [2024-12-16 06:27:00.274687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:43.350  [2024-12-16 06:27:00.274714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:16:43.350  passed
00:16:43.350    Test: blockdev nvme admin passthru ...passed
00:16:43.609    Test: blockdev copy ...passed
00:16:43.609  
00:16:43.609  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:16:43.609                suites      1      1    n/a      0        0
00:16:43.609                 tests     23     23     23      0        0
00:16:43.609               asserts    152    152    152      0      n/a
00:16:43.609  
00:16:43.609  Elapsed time =    0.904 seconds
00:16:43.868   06:27:00	-- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:43.868   06:27:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.868   06:27:00	-- common/autotest_common.sh@10 -- # set +x
00:16:43.868   06:27:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.868   06:27:00	-- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:16:43.868   06:27:00	-- target/bdevio.sh@30 -- # nvmftestfini
00:16:43.868   06:27:00	-- nvmf/common.sh@476 -- # nvmfcleanup
00:16:43.868   06:27:00	-- nvmf/common.sh@116 -- # sync
00:16:43.868   06:27:00	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:16:43.868   06:27:00	-- nvmf/common.sh@119 -- # set +e
00:16:43.868   06:27:00	-- nvmf/common.sh@120 -- # for i in {1..20}
00:16:43.868   06:27:00	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:16:43.868  rmmod nvme_tcp
00:16:43.868  rmmod nvme_fabrics
00:16:43.868  rmmod nvme_keyring
00:16:43.868   06:27:00	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:43.868   06:27:00	-- nvmf/common.sh@123 -- # set -e
00:16:43.868   06:27:00	-- nvmf/common.sh@124 -- # return 0
00:16:43.868   06:27:00	-- nvmf/common.sh@477 -- # '[' -n 77041 ']'
00:16:43.868   06:27:00	-- nvmf/common.sh@478 -- # killprocess 77041
00:16:43.868   06:27:00	-- common/autotest_common.sh@936 -- # '[' -z 77041 ']'
00:16:43.868   06:27:00	-- common/autotest_common.sh@940 -- # kill -0 77041
00:16:43.868    06:27:00	-- common/autotest_common.sh@941 -- # uname
00:16:43.868   06:27:00	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:43.868    06:27:00	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77041
00:16:43.868  killing process with pid 77041
00:16:43.868   06:27:00	-- common/autotest_common.sh@942 -- # process_name=reactor_3
00:16:43.868   06:27:00	-- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']'
00:16:43.868   06:27:00	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 77041'
00:16:43.868   06:27:00	-- common/autotest_common.sh@955 -- # kill 77041
00:16:43.868   06:27:00	-- common/autotest_common.sh@960 -- # wait 77041
00:16:44.127   06:27:01	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:16:44.127   06:27:01	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:16:44.127   06:27:01	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:16:44.127   06:27:01	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:44.127   06:27:01	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:16:44.127   06:27:01	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:44.127   06:27:01	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:44.127    06:27:01	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:44.127   06:27:01	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:16:44.127  
00:16:44.127  real	0m3.447s
00:16:44.127  user	0m12.368s
00:16:44.127  sys	0m0.950s
00:16:44.127   06:27:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:44.127   06:27:01	-- common/autotest_common.sh@10 -- # set +x
00:16:44.127  ************************************
00:16:44.127  END TEST nvmf_bdevio
00:16:44.127  ************************************
00:16:44.127   06:27:01	-- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']'
00:16:44.127   06:27:01	-- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:16:44.127   06:27:01	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:16:44.127   06:27:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:44.127   06:27:01	-- common/autotest_common.sh@10 -- # set +x
00:16:44.387  ************************************
00:16:44.387  START TEST nvmf_bdevio_no_huge
00:16:44.387  ************************************
00:16:44.387   06:27:01	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:16:44.387  * Looking for test storage...
00:16:44.387  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:44.387    06:27:01	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:44.387     06:27:01	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:44.387     06:27:01	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:44.387    06:27:01	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:44.387    06:27:01	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:44.387    06:27:01	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:44.387    06:27:01	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:44.387    06:27:01	-- scripts/common.sh@335 -- # IFS=.-:
00:16:44.387    06:27:01	-- scripts/common.sh@335 -- # read -ra ver1
00:16:44.387    06:27:01	-- scripts/common.sh@336 -- # IFS=.-:
00:16:44.387    06:27:01	-- scripts/common.sh@336 -- # read -ra ver2
00:16:44.387    06:27:01	-- scripts/common.sh@337 -- # local 'op=<'
00:16:44.387    06:27:01	-- scripts/common.sh@339 -- # ver1_l=2
00:16:44.387    06:27:01	-- scripts/common.sh@340 -- # ver2_l=1
00:16:44.387    06:27:01	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:44.387    06:27:01	-- scripts/common.sh@343 -- # case "$op" in
00:16:44.387    06:27:01	-- scripts/common.sh@344 -- # : 1
00:16:44.387    06:27:01	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:44.387    06:27:01	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:44.387     06:27:01	-- scripts/common.sh@364 -- # decimal 1
00:16:44.387     06:27:01	-- scripts/common.sh@352 -- # local d=1
00:16:44.387     06:27:01	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:44.387     06:27:01	-- scripts/common.sh@354 -- # echo 1
00:16:44.387    06:27:01	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:44.387     06:27:01	-- scripts/common.sh@365 -- # decimal 2
00:16:44.387     06:27:01	-- scripts/common.sh@352 -- # local d=2
00:16:44.387     06:27:01	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:44.387     06:27:01	-- scripts/common.sh@354 -- # echo 2
00:16:44.387    06:27:01	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:44.387    06:27:01	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:44.387    06:27:01	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:44.387    06:27:01	-- scripts/common.sh@367 -- # return 0
00:16:44.387    06:27:01	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:44.387    06:27:01	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:44.387  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.387  		--rc genhtml_branch_coverage=1
00:16:44.387  		--rc genhtml_function_coverage=1
00:16:44.387  		--rc genhtml_legend=1
00:16:44.387  		--rc geninfo_all_blocks=1
00:16:44.387  		--rc geninfo_unexecuted_blocks=1
00:16:44.387  		
00:16:44.387  		'
00:16:44.387    06:27:01	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:44.387  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.387  		--rc genhtml_branch_coverage=1
00:16:44.387  		--rc genhtml_function_coverage=1
00:16:44.387  		--rc genhtml_legend=1
00:16:44.387  		--rc geninfo_all_blocks=1
00:16:44.387  		--rc geninfo_unexecuted_blocks=1
00:16:44.387  		
00:16:44.387  		'
00:16:44.387    06:27:01	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:44.387  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.387  		--rc genhtml_branch_coverage=1
00:16:44.387  		--rc genhtml_function_coverage=1
00:16:44.387  		--rc genhtml_legend=1
00:16:44.387  		--rc geninfo_all_blocks=1
00:16:44.387  		--rc geninfo_unexecuted_blocks=1
00:16:44.387  		
00:16:44.387  		'
00:16:44.387    06:27:01	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:44.387  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.387  		--rc genhtml_branch_coverage=1
00:16:44.387  		--rc genhtml_function_coverage=1
00:16:44.387  		--rc genhtml_legend=1
00:16:44.387  		--rc geninfo_all_blocks=1
00:16:44.387  		--rc geninfo_unexecuted_blocks=1
00:16:44.387  		
00:16:44.387  		'
00:16:44.387   06:27:01	-- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:44.387     06:27:01	-- nvmf/common.sh@7 -- # uname -s
00:16:44.387    06:27:01	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:44.387    06:27:01	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:44.387    06:27:01	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:44.387    06:27:01	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:44.387    06:27:01	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:44.387    06:27:01	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:44.387    06:27:01	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:44.387    06:27:01	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:44.387    06:27:01	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:44.387     06:27:01	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:44.387    06:27:01	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:16:44.387    06:27:01	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:16:44.388    06:27:01	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:44.388    06:27:01	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:44.388    06:27:01	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:44.388    06:27:01	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:44.388     06:27:01	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:44.388     06:27:01	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:44.388     06:27:01	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:44.388      06:27:01	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:44.388      06:27:01	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:44.388      06:27:01	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:44.388      06:27:01	-- paths/export.sh@5 -- # export PATH
00:16:44.388      06:27:01	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:44.388    06:27:01	-- nvmf/common.sh@46 -- # : 0
00:16:44.388    06:27:01	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:44.388    06:27:01	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:44.388    06:27:01	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:44.388    06:27:01	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:44.388    06:27:01	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:44.388    06:27:01	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:44.388    06:27:01	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:44.388    06:27:01	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:44.388   06:27:01	-- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:44.388   06:27:01	-- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:44.388   06:27:01	-- target/bdevio.sh@14 -- # nvmftestinit
00:16:44.388   06:27:01	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:16:44.388   06:27:01	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:44.388   06:27:01	-- nvmf/common.sh@436 -- # prepare_net_devs
00:16:44.388   06:27:01	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:16:44.388   06:27:01	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:16:44.388   06:27:01	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:44.388   06:27:01	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:44.388    06:27:01	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:44.388   06:27:01	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:16:44.388   06:27:01	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:16:44.388   06:27:01	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:16:44.388   06:27:01	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:16:44.388   06:27:01	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:16:44.388   06:27:01	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:16:44.388   06:27:01	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:44.388   06:27:01	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:44.388   06:27:01	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:16:44.388   06:27:01	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:16:44.388   06:27:01	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:44.388   06:27:01	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:44.388   06:27:01	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:44.388   06:27:01	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:44.388   06:27:01	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:44.388   06:27:01	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:44.388   06:27:01	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:44.388   06:27:01	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:44.388   06:27:01	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:16:44.388   06:27:01	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:16:44.388  Cannot find device "nvmf_tgt_br"
00:16:44.388   06:27:01	-- nvmf/common.sh@154 -- # true
00:16:44.388   06:27:01	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:16:44.647  Cannot find device "nvmf_tgt_br2"
00:16:44.647   06:27:01	-- nvmf/common.sh@155 -- # true
00:16:44.647   06:27:01	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:16:44.647   06:27:01	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:16:44.647  Cannot find device "nvmf_tgt_br"
00:16:44.647   06:27:01	-- nvmf/common.sh@157 -- # true
00:16:44.647   06:27:01	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:16:44.647  Cannot find device "nvmf_tgt_br2"
00:16:44.647   06:27:01	-- nvmf/common.sh@158 -- # true
00:16:44.647   06:27:01	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:16:44.647   06:27:01	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:16:44.647   06:27:01	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:44.647  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:44.647   06:27:01	-- nvmf/common.sh@161 -- # true
00:16:44.647   06:27:01	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:44.647  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:44.647   06:27:01	-- nvmf/common.sh@162 -- # true
00:16:44.647   06:27:01	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:16:44.647   06:27:01	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:44.647   06:27:01	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:44.647   06:27:01	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:44.647   06:27:01	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:44.647   06:27:01	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:44.647   06:27:01	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:44.647   06:27:01	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:16:44.647   06:27:01	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:16:44.647   06:27:01	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:16:44.647   06:27:01	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:16:44.647   06:27:01	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:16:44.647   06:27:01	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:16:44.647   06:27:01	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:44.647   06:27:01	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:44.647   06:27:01	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:44.647   06:27:01	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:16:44.647   06:27:01	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:16:44.647   06:27:01	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:16:44.906   06:27:01	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:44.906   06:27:01	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:44.906   06:27:01	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:44.906   06:27:01	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:44.906   06:27:01	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:16:44.906  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:44.906  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms
00:16:44.906  
00:16:44.906  --- 10.0.0.2 ping statistics ---
00:16:44.906  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:44.906  rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:16:44.906   06:27:01	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:16:44.906  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:44.906  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms
00:16:44.906  
00:16:44.906  --- 10.0.0.3 ping statistics ---
00:16:44.906  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:44.906  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:16:44.906   06:27:01	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:44.906  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:44.906  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms
00:16:44.906  
00:16:44.906  --- 10.0.0.1 ping statistics ---
00:16:44.906  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:44.906  rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms
00:16:44.906   06:27:01	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:44.906   06:27:01	-- nvmf/common.sh@421 -- # return 0
00:16:44.906   06:27:01	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:16:44.906   06:27:01	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:44.906   06:27:01	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:16:44.906   06:27:01	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:16:44.906   06:27:01	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:44.906   06:27:01	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:16:44.906   06:27:01	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
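The "Cannot find device ..." and "Cannot open network namespace ..." lines above are expected: nvmf_veth_init first tears down any leftover topology, then rebuilds it. A condensed sketch of what it builds (host-side veth peers enslaved to the nvmf_br bridge, target-side peers moved into the nvmf_tgt_ns_spdk namespace), assuming root plus iproute2 and iptables:

    # Sketch: the veth/bridge topology assembled by nvmf_veth_init above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the three host-side peers
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target reachability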
00:16:44.906   06:27:01	-- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:16:44.906   06:27:01	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:16:44.906   06:27:01	-- common/autotest_common.sh@722 -- # xtrace_disable
00:16:44.906   06:27:01	-- common/autotest_common.sh@10 -- # set +x
00:16:44.906   06:27:01	-- nvmf/common.sh@469 -- # nvmfpid=77294
00:16:44.906   06:27:01	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:16:44.906   06:27:01	-- nvmf/common.sh@470 -- # waitforlisten 77294
00:16:44.906   06:27:01	-- common/autotest_common.sh@829 -- # '[' -z 77294 ']'
00:16:44.906   06:27:01	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:44.906   06:27:01	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:44.906  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:44.906   06:27:01	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:44.906   06:27:01	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:44.906   06:27:01	-- common/autotest_common.sh@10 -- # set +x
00:16:44.906  [2024-12-16 06:27:01.766536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:44.906  [2024-12-16 06:27:01.766619] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
00:16:45.165  [2024-12-16 06:27:01.915271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:45.165  [2024-12-16 06:27:02.016478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:45.165  [2024-12-16 06:27:02.016611] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:45.165  [2024-12-16 06:27:02.016622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:45.165  [2024-12-16 06:27:02.016630] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:45.165  [2024-12-16 06:27:02.016769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:16:45.165  [2024-12-16 06:27:02.017352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:16:45.165  [2024-12-16 06:27:02.017475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:16:45.166  [2024-12-16 06:27:02.017705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
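nvmfappstart launches the target inside that namespace; core mask 0x78 is binary 1111000, i.e. cores 3 through 6, which matches the four reactors reported above. A minimal launch-and-wait sketch (the polling loop is a crude stand-in for the script's waitforlisten helper, not its actual implementation):

    # Sketch: start nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # -m 0x78 pins reactors to cores 3-6; -i 0 selects shared-memory id 0;
    # --no-huge -s 1024 runs without hugepages on a 1024 MB memory budget.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done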
00:16:45.732   06:27:02	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:45.732   06:27:02	-- common/autotest_common.sh@862 -- # return 0
00:16:45.732   06:27:02	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:16:45.732   06:27:02	-- common/autotest_common.sh@728 -- # xtrace_disable
00:16:45.732   06:27:02	-- common/autotest_common.sh@10 -- # set +x
00:16:45.991   06:27:02	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:45.991   06:27:02	-- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:45.991   06:27:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.991   06:27:02	-- common/autotest_common.sh@10 -- # set +x
00:16:45.991  [2024-12-16 06:27:02.730929] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:45.991   06:27:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.991   06:27:02	-- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:45.991   06:27:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.991   06:27:02	-- common/autotest_common.sh@10 -- # set +x
00:16:45.992  Malloc0
00:16:45.992   06:27:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.992   06:27:02	-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:45.992   06:27:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.992   06:27:02	-- common/autotest_common.sh@10 -- # set +x
00:16:45.992   06:27:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.992   06:27:02	-- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:45.992   06:27:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.992   06:27:02	-- common/autotest_common.sh@10 -- # set +x
00:16:45.992   06:27:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.992   06:27:02	-- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:45.992   06:27:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.992   06:27:02	-- common/autotest_common.sh@10 -- # set +x
00:16:45.992  [2024-12-16 06:27:02.771140] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:45.992   06:27:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
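With the target up, bdevio.sh provisions it through a handful of RPCs: a TCP transport, a 64 MiB malloc bdev, a subsystem, its namespace, and a TCP listener on 10.0.0.2:4420. The script's rpc_cmd wrapper is equivalent to calling rpc.py directly, roughly:

    # Sketch: the provisioning sequence traced above, as direct rpc.py calls.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as above
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420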
00:16:45.992   06:27:02	-- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:16:45.992    06:27:02	-- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:16:45.992    06:27:02	-- nvmf/common.sh@520 -- # config=()
00:16:45.992    06:27:02	-- nvmf/common.sh@520 -- # local subsystem config
00:16:45.992    06:27:02	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:16:45.992    06:27:02	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:16:45.992  {
00:16:45.992    "params": {
00:16:45.992      "name": "Nvme$subsystem",
00:16:45.992      "trtype": "$TEST_TRANSPORT",
00:16:45.992      "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:45.992      "adrfam": "ipv4",
00:16:45.992      "trsvcid": "$NVMF_PORT",
00:16:45.992      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:45.992      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:45.992      "hdgst": ${hdgst:-false},
00:16:45.992      "ddgst": ${ddgst:-false}
00:16:45.992    },
00:16:45.992    "method": "bdev_nvme_attach_controller"
00:16:45.992  }
00:16:45.992  EOF
00:16:45.992  )")
00:16:45.992     06:27:02	-- nvmf/common.sh@542 -- # cat
00:16:45.992    06:27:02	-- nvmf/common.sh@544 -- # jq .
00:16:45.992     06:27:02	-- nvmf/common.sh@545 -- # IFS=,
00:16:45.992     06:27:02	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:16:45.992    "params": {
00:16:45.992      "name": "Nvme1",
00:16:45.992      "trtype": "tcp",
00:16:45.992      "traddr": "10.0.0.2",
00:16:45.992      "adrfam": "ipv4",
00:16:45.992      "trsvcid": "4420",
00:16:45.992      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:45.992      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:45.992      "hdgst": false,
00:16:45.992      "ddgst": false
00:16:45.992    },
00:16:45.992    "method": "bdev_nvme_attach_controller"
00:16:45.992  }'
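gen_nvmf_target_json emits only the bdev_nvme_attach_controller parameters shown above; bdevio reads the full config through the process-substitution fd /dev/fd/62 rather than a file on disk. A sketch of the same pattern with the config spelled out; gen_config is a hypothetical stand-in, and the surrounding subsystems/config wrapper is assumed here rather than shown in this log:

    # Sketch: hand a generated JSON config to bdevio via process substitution.
    gen_config() {
      printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }'
    }
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_config) --no-huge -s 1024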
00:16:45.992  [2024-12-16 06:27:02.834979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:45.992  [2024-12-16 06:27:02.835061] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77344 ]
00:16:46.250  [2024-12-16 06:27:02.981970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:16:46.250  [2024-12-16 06:27:03.145923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:46.250  [2024-12-16 06:27:03.146051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:46.250  [2024-12-16 06:27:03.146069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:46.509  [2024-12-16 06:27:03.322746] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:16:46.509  [2024-12-16 06:27:03.323088] rpc.c:  90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:16:46.509  I/O targets:
00:16:46.509    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:16:46.509  
00:16:46.509  
00:16:46.509       CUnit - A unit testing framework for C - Version 2.1-3
00:16:46.509       http://cunit.sourceforge.net/
00:16:46.509  
00:16:46.509  
00:16:46.509  Suite: bdevio tests on: Nvme1n1
00:16:46.509    Test: blockdev write read block ...passed
00:16:46.509    Test: blockdev write zeroes read block ...passed
00:16:46.509    Test: blockdev write zeroes read no split ...passed
00:16:46.509    Test: blockdev write zeroes read split ...passed
00:16:46.509    Test: blockdev write zeroes read split partial ...passed
00:16:46.509    Test: blockdev reset ...[2024-12-16 06:27:03.453447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:46.509  [2024-12-16 06:27:03.453716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c51c0 (9): Bad file descriptor
00:16:46.509  [2024-12-16 06:27:03.472126] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:46.509  passed
00:16:46.509    Test: blockdev write read 8 blocks ...passed
00:16:46.509    Test: blockdev write read size > 128k ...passed
00:16:46.509    Test: blockdev write read invalid size ...passed
00:16:46.768    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:16:46.768    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:16:46.768    Test: blockdev write read max offset ...passed
00:16:46.768    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:16:46.768    Test: blockdev writev readv 8 blocks ...passed
00:16:46.768    Test: blockdev writev readv 30 x 1block ...passed
00:16:46.768    Test: blockdev writev readv block ...passed
00:16:46.768    Test: blockdev writev readv size > 128k ...passed
00:16:46.768    Test: blockdev writev readv size > 128k in two iovs ...passed
00:16:46.768    Test: blockdev comparev and writev ...[2024-12-16 06:27:03.644928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:46.768  [2024-12-16 06:27:03.644974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.645004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:46.768  [2024-12-16 06:27:03.645014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.645421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:46.768  [2024-12-16 06:27:03.645442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.645457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:46.768  [2024-12-16 06:27:03.645466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.645865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:46.768  [2024-12-16 06:27:03.645886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.645902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:46.768  [2024-12-16 06:27:03.645911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.646219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:46.768  [2024-12-16 06:27:03.646234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.646248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:46.768  [2024-12-16 06:27:03.646256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:16:46.768  passed
00:16:46.768    Test: blockdev nvme passthru rw ...passed
00:16:46.768    Test: blockdev nvme passthru vendor specific ...[2024-12-16 06:27:03.731052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:46.768  [2024-12-16 06:27:03.731088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.731199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:46.768  [2024-12-16 06:27:03.731213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:16:46.768  [2024-12-16 06:27:03.731333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:46.768  [2024-12-16 06:27:03.731347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:16:46.768  passed
00:16:46.768    Test: blockdev nvme admin passthru ...[2024-12-16 06:27:03.731475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:46.768  [2024-12-16 06:27:03.731489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:16:47.027  passed
00:16:47.027    Test: blockdev copy ...passed
00:16:47.027  
00:16:47.027  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:16:47.027                suites      1      1    n/a      0        0
00:16:47.027                 tests     23     23     23      0        0
00:16:47.027               asserts    152    152    152      0      n/a
00:16:47.027  
00:16:47.027  Elapsed time =    0.929 seconds
00:16:47.285   06:27:04	-- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:47.285   06:27:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.285   06:27:04	-- common/autotest_common.sh@10 -- # set +x
00:16:47.544   06:27:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.544   06:27:04	-- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:16:47.544   06:27:04	-- target/bdevio.sh@30 -- # nvmftestfini
00:16:47.544   06:27:04	-- nvmf/common.sh@476 -- # nvmfcleanup
00:16:47.544   06:27:04	-- nvmf/common.sh@116 -- # sync
00:16:47.544   06:27:04	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:16:47.544   06:27:04	-- nvmf/common.sh@119 -- # set +e
00:16:47.544   06:27:04	-- nvmf/common.sh@120 -- # for i in {1..20}
00:16:47.544   06:27:04	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:16:47.544  rmmod nvme_tcp
00:16:47.544  rmmod nvme_fabrics
00:16:47.544  rmmod nvme_keyring
00:16:47.544   06:27:04	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:47.544   06:27:04	-- nvmf/common.sh@123 -- # set -e
00:16:47.544   06:27:04	-- nvmf/common.sh@124 -- # return 0
00:16:47.544   06:27:04	-- nvmf/common.sh@477 -- # '[' -n 77294 ']'
00:16:47.544   06:27:04	-- nvmf/common.sh@478 -- # killprocess 77294
00:16:47.544   06:27:04	-- common/autotest_common.sh@936 -- # '[' -z 77294 ']'
00:16:47.544   06:27:04	-- common/autotest_common.sh@940 -- # kill -0 77294
00:16:47.544    06:27:04	-- common/autotest_common.sh@941 -- # uname
00:16:47.544   06:27:04	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:47.544    06:27:04	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77294
00:16:47.544   06:27:04	-- common/autotest_common.sh@942 -- # process_name=reactor_3
00:16:47.544   06:27:04	-- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']'
00:16:47.544   06:27:04	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 77294'
00:16:47.544  killing process with pid 77294
00:16:47.544   06:27:04	-- common/autotest_common.sh@955 -- # kill 77294
00:16:47.544   06:27:04	-- common/autotest_common.sh@960 -- # wait 77294
00:16:47.803   06:27:04	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:16:47.803   06:27:04	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:16:47.803   06:27:04	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:16:47.803   06:27:04	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:47.803   06:27:04	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:16:47.803   06:27:04	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:47.803   06:27:04	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:47.803    06:27:04	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:48.062   06:27:04	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:16:48.062  
00:16:48.062  real	0m3.697s
00:16:48.062  user	0m13.122s
00:16:48.062  sys	0m1.367s
00:16:48.062   06:27:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:48.062   06:27:04	-- common/autotest_common.sh@10 -- # set +x
00:16:48.062  ************************************
00:16:48.062  END TEST nvmf_bdevio_no_huge
00:16:48.062  ************************************
00:16:48.062   06:27:04	-- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp
00:16:48.062   06:27:04	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:48.062   06:27:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:48.062   06:27:04	-- common/autotest_common.sh@10 -- # set +x
00:16:48.062  ************************************
00:16:48.062  START TEST nvmf_tls
00:16:48.062  ************************************
00:16:48.062   06:27:04	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp
00:16:48.062  * Looking for test storage...
00:16:48.062  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:48.062    06:27:04	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:48.062     06:27:04	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:48.062     06:27:04	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:48.062    06:27:05	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:48.062    06:27:05	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:48.062    06:27:05	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:48.062    06:27:05	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:48.062    06:27:05	-- scripts/common.sh@335 -- # IFS=.-:
00:16:48.062    06:27:05	-- scripts/common.sh@335 -- # read -ra ver1
00:16:48.062    06:27:05	-- scripts/common.sh@336 -- # IFS=.-:
00:16:48.062    06:27:05	-- scripts/common.sh@336 -- # read -ra ver2
00:16:48.062    06:27:05	-- scripts/common.sh@337 -- # local 'op=<'
00:16:48.062    06:27:05	-- scripts/common.sh@339 -- # ver1_l=2
00:16:48.062    06:27:05	-- scripts/common.sh@340 -- # ver2_l=1
00:16:48.062    06:27:05	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:48.062    06:27:05	-- scripts/common.sh@343 -- # case "$op" in
00:16:48.062    06:27:05	-- scripts/common.sh@344 -- # : 1
00:16:48.062    06:27:05	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:48.062    06:27:05	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:48.062     06:27:05	-- scripts/common.sh@364 -- # decimal 1
00:16:48.062     06:27:05	-- scripts/common.sh@352 -- # local d=1
00:16:48.062     06:27:05	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:48.062     06:27:05	-- scripts/common.sh@354 -- # echo 1
00:16:48.062    06:27:05	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:48.062     06:27:05	-- scripts/common.sh@365 -- # decimal 2
00:16:48.062     06:27:05	-- scripts/common.sh@352 -- # local d=2
00:16:48.062     06:27:05	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:48.062     06:27:05	-- scripts/common.sh@354 -- # echo 2
00:16:48.062    06:27:05	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:48.062    06:27:05	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:48.062    06:27:05	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:48.062    06:27:05	-- scripts/common.sh@367 -- # return 0
00:16:48.062    06:27:05	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:48.062    06:27:05	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:48.062  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:48.062  		--rc genhtml_branch_coverage=1
00:16:48.062  		--rc genhtml_function_coverage=1
00:16:48.062  		--rc genhtml_legend=1
00:16:48.062  		--rc geninfo_all_blocks=1
00:16:48.062  		--rc geninfo_unexecuted_blocks=1
00:16:48.062  		
00:16:48.062  		'
00:16:48.062    06:27:05	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:48.062  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:48.062  		--rc genhtml_branch_coverage=1
00:16:48.062  		--rc genhtml_function_coverage=1
00:16:48.062  		--rc genhtml_legend=1
00:16:48.062  		--rc geninfo_all_blocks=1
00:16:48.062  		--rc geninfo_unexecuted_blocks=1
00:16:48.062  		
00:16:48.062  		'
00:16:48.062    06:27:05	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:48.062  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:48.062  		--rc genhtml_branch_coverage=1
00:16:48.062  		--rc genhtml_function_coverage=1
00:16:48.063  		--rc genhtml_legend=1
00:16:48.063  		--rc geninfo_all_blocks=1
00:16:48.063  		--rc geninfo_unexecuted_blocks=1
00:16:48.063  		
00:16:48.063  		'
00:16:48.063    06:27:05	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:48.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:48.063  		--rc genhtml_branch_coverage=1
00:16:48.063  		--rc genhtml_function_coverage=1
00:16:48.063  		--rc genhtml_legend=1
00:16:48.063  		--rc geninfo_all_blocks=1
00:16:48.063  		--rc geninfo_unexecuted_blocks=1
00:16:48.063  		
00:16:48.063  		'
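The block above probes the installed lcov version (lcov --version piped through awk) and runs it through the script's lt/cmp_versions helpers before settling on the LCOV_OPTS shown. A generic dotted-version comparison, equivalent in spirit but using sort -V rather than the helper's field-by-field loop:

    # Sketch: "is version A older than version B?" via sort -V.
    ver_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    ver_lt 1.15 2 && echo "1.15 < 2"        # the comparison traced above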
00:16:48.063   06:27:05	-- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:48.063     06:27:05	-- nvmf/common.sh@7 -- # uname -s
00:16:48.063    06:27:05	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:48.063    06:27:05	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:48.063    06:27:05	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:48.063    06:27:05	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:48.063    06:27:05	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:48.063    06:27:05	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:48.063    06:27:05	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:48.063    06:27:05	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:48.063    06:27:05	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:48.063     06:27:05	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:48.063    06:27:05	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:16:48.063    06:27:05	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:16:48.063    06:27:05	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:48.063    06:27:05	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:48.063    06:27:05	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:48.063    06:27:05	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:48.063     06:27:05	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:48.063     06:27:05	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:48.063     06:27:05	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:48.063      06:27:05	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:48.328      06:27:05	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:48.328      06:27:05	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:48.328      06:27:05	-- paths/export.sh@5 -- # export PATH
00:16:48.328      06:27:05	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:48.328    06:27:05	-- nvmf/common.sh@46 -- # : 0
00:16:48.328    06:27:05	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:16:48.328    06:27:05	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:16:48.328    06:27:05	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:16:48.328    06:27:05	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:48.328    06:27:05	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:48.328    06:27:05	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:16:48.328    06:27:05	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:16:48.328    06:27:05	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:16:48.328   06:27:05	-- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:48.328   06:27:05	-- target/tls.sh@71 -- # nvmftestinit
00:16:48.328   06:27:05	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:16:48.328   06:27:05	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:48.328   06:27:05	-- nvmf/common.sh@436 -- # prepare_net_devs
00:16:48.328   06:27:05	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:16:48.328   06:27:05	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:16:48.328   06:27:05	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:48.328   06:27:05	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:48.328    06:27:05	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:48.328   06:27:05	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:16:48.328   06:27:05	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:16:48.328   06:27:05	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:16:48.328   06:27:05	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:16:48.328   06:27:05	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:16:48.328   06:27:05	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:16:48.328   06:27:05	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:48.328   06:27:05	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:48.328   06:27:05	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:16:48.328   06:27:05	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:16:48.328   06:27:05	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:48.328   06:27:05	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:48.328   06:27:05	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:48.328   06:27:05	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:48.328   06:27:05	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:48.328   06:27:05	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:48.328   06:27:05	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:48.328   06:27:05	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:48.328   06:27:05	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:16:48.328   06:27:05	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:16:48.328  Cannot find device "nvmf_tgt_br"
00:16:48.328   06:27:05	-- nvmf/common.sh@154 -- # true
00:16:48.328   06:27:05	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:16:48.328  Cannot find device "nvmf_tgt_br2"
00:16:48.328   06:27:05	-- nvmf/common.sh@155 -- # true
00:16:48.328   06:27:05	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:16:48.328   06:27:05	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:16:48.328  Cannot find device "nvmf_tgt_br"
00:16:48.328   06:27:05	-- nvmf/common.sh@157 -- # true
00:16:48.328   06:27:05	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:16:48.328  Cannot find device "nvmf_tgt_br2"
00:16:48.328   06:27:05	-- nvmf/common.sh@158 -- # true
00:16:48.328   06:27:05	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:16:48.328   06:27:05	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:16:48.328   06:27:05	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:48.328  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:48.328   06:27:05	-- nvmf/common.sh@161 -- # true
00:16:48.328   06:27:05	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:48.328  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:48.328   06:27:05	-- nvmf/common.sh@162 -- # true
00:16:48.328   06:27:05	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:16:48.328   06:27:05	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:48.328   06:27:05	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:48.328   06:27:05	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:48.329   06:27:05	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:48.329   06:27:05	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:48.329   06:27:05	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:48.329   06:27:05	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:16:48.329   06:27:05	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:16:48.329   06:27:05	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:16:48.329   06:27:05	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:16:48.329   06:27:05	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:16:48.329   06:27:05	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:16:48.329   06:27:05	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:48.329   06:27:05	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:48.329   06:27:05	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:48.329   06:27:05	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:16:48.329   06:27:05	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:16:48.329   06:27:05	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:16:48.589   06:27:05	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:48.589   06:27:05	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:48.589   06:27:05	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:48.589   06:27:05	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:48.589   06:27:05	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:16:48.589  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:48.589  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms
00:16:48.589  
00:16:48.589  --- 10.0.0.2 ping statistics ---
00:16:48.589  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:48.589  rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:16:48.589   06:27:05	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:16:48.589  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:48.589  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms
00:16:48.589  
00:16:48.589  --- 10.0.0.3 ping statistics ---
00:16:48.589  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:48.589  rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:16:48.589   06:27:05	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:48.589  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:48.589  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms
00:16:48.589  
00:16:48.589  --- 10.0.0.1 ping statistics ---
00:16:48.589  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:48.589  rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms
00:16:48.589   06:27:05	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:48.589   06:27:05	-- nvmf/common.sh@421 -- # return 0
00:16:48.589   06:27:05	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:16:48.589   06:27:05	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:48.589   06:27:05	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:16:48.589   06:27:05	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:16:48.589   06:27:05	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:48.589   06:27:05	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:16:48.589   06:27:05	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:16:48.589   06:27:05	-- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:16:48.589   06:27:05	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:16:48.589   06:27:05	-- common/autotest_common.sh@722 -- # xtrace_disable
00:16:48.589   06:27:05	-- common/autotest_common.sh@10 -- # set +x
00:16:48.589   06:27:05	-- nvmf/common.sh@469 -- # nvmfpid=77541
00:16:48.589   06:27:05	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:16:48.589   06:27:05	-- nvmf/common.sh@470 -- # waitforlisten 77541
00:16:48.589   06:27:05	-- common/autotest_common.sh@829 -- # '[' -z 77541 ']'
00:16:48.589   06:27:05	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:48.589   06:27:05	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:48.589   06:27:05	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:48.589  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:48.589   06:27:05	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:48.589   06:27:05	-- common/autotest_common.sh@10 -- # set +x
00:16:48.589  [2024-12-16 06:27:05.432531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:48.589  [2024-12-16 06:27:05.433190] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:48.849  [2024-12-16 06:27:05.576820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:48.849  [2024-12-16 06:27:05.700565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:48.849  [2024-12-16 06:27:05.700760] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:48.849  [2024-12-16 06:27:05.700777] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:48.849  [2024-12-16 06:27:05.700788] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:48.849  [2024-12-16 06:27:05.700830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:49.786   06:27:06	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:49.786   06:27:06	-- common/autotest_common.sh@862 -- # return 0
00:16:49.786   06:27:06	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:16:49.786   06:27:06	-- common/autotest_common.sh@728 -- # xtrace_disable
00:16:49.786   06:27:06	-- common/autotest_common.sh@10 -- # set +x
00:16:49.786   06:27:06	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:49.786   06:27:06	-- target/tls.sh@74 -- # '[' tcp '!=' tcp ']'
00:16:49.786   06:27:06	-- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:16:49.786  true
00:16:49.786    06:27:06	-- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:16:49.786    06:27:06	-- target/tls.sh@82 -- # jq -r .tls_version
00:16:50.045   06:27:06	-- target/tls.sh@82 -- # version=0
00:16:50.045   06:27:06	-- target/tls.sh@83 -- # [[ 0 != \0 ]]
00:16:50.046   06:27:06	-- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:16:50.304    06:27:07	-- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:16:50.304    06:27:07	-- target/tls.sh@90 -- # jq -r .tls_version
00:16:50.563   06:27:07	-- target/tls.sh@90 -- # version=13
00:16:50.563   06:27:07	-- target/tls.sh@91 -- # [[ 13 != \1\3 ]]
00:16:50.563   06:27:07	-- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:16:50.822    06:27:07	-- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:16:50.822    06:27:07	-- target/tls.sh@98 -- # jq -r .tls_version
00:16:51.080   06:27:07	-- target/tls.sh@98 -- # version=7
00:16:51.080   06:27:07	-- target/tls.sh@99 -- # [[ 7 != \7 ]]
00:16:51.080    06:27:07	-- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:16:51.080    06:27:07	-- target/tls.sh@105 -- # jq -r .enable_ktls
00:16:51.339   06:27:08	-- target/tls.sh@105 -- # ktls=false
00:16:51.339   06:27:08	-- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]]
00:16:51.339   06:27:08	-- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:16:51.598    06:27:08	-- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:16:51.598    06:27:08	-- target/tls.sh@113 -- # jq -r .enable_ktls
00:16:51.857   06:27:08	-- target/tls.sh@113 -- # ktls=true
00:16:51.857   06:27:08	-- target/tls.sh@114 -- # [[ true != \t\r\u\e ]]
00:16:51.857   06:27:08	-- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:16:52.119    06:27:08	-- target/tls.sh@121 -- # jq -r .enable_ktls
00:16:52.119    06:27:08	-- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:16:52.414   06:27:09	-- target/tls.sh@121 -- # ktls=false
00:16:52.414   06:27:09	-- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]]
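tls.sh then exercises the ssl socket implementation's option surface over RPC: set tls_version to 13 and then 7, toggle enable_ktls on and off, and read each value back with sock_impl_get_options piped through jq. A condensed round-trip sketch:

    # Sketch: set and verify ssl sock-impl options, as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    [ "$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)" = 13 ] || echo "tls_version not applied"
    $rpc sock_impl_set_options -i ssl --enable-ktls
    [ "$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)" = true ] || echo "ktls not enabled"
    $rpc sock_impl_set_options -i ssl --disable-ktls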
00:16:52.414    06:27:09	-- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff
00:16:52.414    06:27:09	-- target/tls.sh@49 -- # local key hash crc
00:16:52.414    06:27:09	-- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff
00:16:52.414    06:27:09	-- target/tls.sh@51 -- # hash=01
00:16:52.414     06:27:09	-- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff
00:16:52.414     06:27:09	-- target/tls.sh@52 -- # gzip -1 -c
00:16:52.414     06:27:09	-- target/tls.sh@52 -- # tail -c8
00:16:52.414     06:27:09	-- target/tls.sh@52 -- # head -c 4
00:16:52.414    06:27:09	-- target/tls.sh@52 -- # crc='p$H�'
00:16:52.414     06:27:09	-- target/tls.sh@54 -- # base64 /dev/fd/62
00:16:52.414      06:27:09	-- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�'
00:16:52.414    06:27:09	-- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:16:52.414   06:27:09	-- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:16:52.414    06:27:09	-- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100
00:16:52.414    06:27:09	-- target/tls.sh@49 -- # local key hash crc
00:16:52.414    06:27:09	-- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100
00:16:52.414    06:27:09	-- target/tls.sh@51 -- # hash=01
00:16:52.414     06:27:09	-- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100
00:16:52.414     06:27:09	-- target/tls.sh@52 -- # gzip -1 -c
00:16:52.414     06:27:09	-- target/tls.sh@52 -- # tail -c8
00:16:52.414     06:27:09	-- target/tls.sh@52 -- # head -c 4
00:16:52.414    06:27:09	-- target/tls.sh@52 -- # crc=$'_\006o\330'
00:16:52.414     06:27:09	-- target/tls.sh@54 -- # base64 /dev/fd/62
00:16:52.414      06:27:09	-- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330'
00:16:52.414    06:27:09	-- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:16:52.414   06:27:09	-- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:16:52.414   06:27:09	-- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:16:52.414   06:27:09	-- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt
00:16:52.414   06:27:09	-- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:16:52.414   06:27:09	-- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:16:52.414   06:27:09	-- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:16:52.414   06:27:09	-- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt
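format_interchange_psk turns a raw hex key into the NVMe/TCP PSK interchange form: it computes the key's CRC32 by reading the trailer of gzip -1 (whose last eight bytes are CRC32 then input length), appends those four CRC bytes to the key, base64-encodes the result, and wraps it as NVMeTLSkey-1:<hash>:<base64>:. A sketch of the same derivation:

    # Sketch: derive the PSK interchange string exactly as traced above.
    key=00112233445566778899aabbccddeeff
    hash=01
    # gzip's trailer is CRC32 (little-endian) followed by the uncompressed length,
    # so tail -c8 | head -c4 extracts the 4 CRC bytes of $key.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    printf '%s' "$psk" > key1.txt && chmod 0600 key1.txt    # keys are written out and locked to 0600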
00:16:52.414   06:27:09	-- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:16:52.679   06:27:09	-- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:16:52.939   06:27:09	-- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:16:52.939   06:27:09	-- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:16:52.939   06:27:09	-- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:16:53.198  [2024-12-16 06:27:09.985909] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:53.198   06:27:10	-- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:16:53.458   06:27:10	-- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:16:53.717  [2024-12-16 06:27:10.465965] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:16:53.717  [2024-12-16 06:27:10.466201] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:53.717   06:27:10	-- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:16:53.717  malloc0
00:16:53.717   06:27:10	-- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:16:53.977   06:27:10	-- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
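setup_nvmf_tgt then wires the TLS pieces on the target side: a TCP transport, a subsystem capped at 10 namespaces, a listener opened with -k so the secure (TLS) channel is required, a 32 MiB malloc namespace, and a host entry bound to the key1.txt PSK. As direct rpc.py calls, roughly:

    # Sketch: target-side TLS setup, mirroring the RPCs traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0                     # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"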
00:16:54.236   06:27:11	-- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:06.440  Initializing NVMe Controllers
00:17:06.440  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:06.440  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:06.440  Initialization complete. Launching workers.
00:17:06.440  ========================================================
00:17:06.440                                                                                                               Latency(us)
00:17:06.440  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:17:06.440  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   11905.45      46.51    5376.62    1576.62    7514.48
00:17:06.440  ========================================================
00:17:06.440  Total                                                                    :   11905.45      46.51    5376.62    1576.62    7514.48
00:17:06.440  
00:17:06.440   06:27:21	-- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:06.440   06:27:21	-- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:06.440   06:27:21	-- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:06.440   06:27:21	-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:06.440   06:27:21	-- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt'
00:17:06.440   06:27:21	-- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:06.440   06:27:21	-- target/tls.sh@28 -- # bdevperf_pid=77904
00:17:06.440   06:27:21	-- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:06.440   06:27:21	-- target/tls.sh@31 -- # waitforlisten 77904 /var/tmp/bdevperf.sock
00:17:06.440   06:27:21	-- common/autotest_common.sh@829 -- # '[' -z 77904 ']'
00:17:06.440   06:27:21	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:06.440   06:27:21	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:06.440   06:27:21	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:06.440  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:06.440   06:27:21	-- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:06.440   06:27:21	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:06.440   06:27:21	-- common/autotest_common.sh@10 -- # set +x
00:17:06.440  [2024-12-16 06:27:21.327368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:06.440  [2024-12-16 06:27:21.327461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77904 ]
00:17:06.440  [2024-12-16 06:27:21.467922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:06.440  [2024-12-16 06:27:21.564595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:06.440   06:27:22	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:06.440   06:27:22	-- common/autotest_common.sh@862 -- # return 0
00:17:06.440   06:27:22	-- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:06.440  [2024-12-16 06:27:22.469639] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:06.440  TLSTESTn1
00:17:06.440   06:27:22	-- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:17:06.440  Running I/O for 10 seconds...
00:17:16.417  
00:17:16.417                                                                                                  Latency(us)
00:17:16.417  
[2024-12-16T06:27:33.393Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:16.417  
[2024-12-16T06:27:33.393Z]  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:16.417  	 Verification LBA range: start 0x0 length 0x2000
00:17:16.417  	 TLSTESTn1           :      10.01    6647.31      25.97       0.00     0.00   19226.03    3961.95   21924.77
00:17:16.417  
[2024-12-16T06:27:33.393Z]  ===================================================================================================================
00:17:16.417  
[2024-12-16T06:27:33.393Z]  Total                       :               6647.31      25.97       0.00     0.00   19226.03    3961.95   21924.77
00:17:16.417  0
00:17:16.417   06:27:32	-- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:16.417   06:27:32	-- target/tls.sh@45 -- # killprocess 77904
00:17:16.417   06:27:32	-- common/autotest_common.sh@936 -- # '[' -z 77904 ']'
00:17:16.417   06:27:32	-- common/autotest_common.sh@940 -- # kill -0 77904
00:17:16.417    06:27:32	-- common/autotest_common.sh@941 -- # uname
00:17:16.417   06:27:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:16.417    06:27:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77904
00:17:16.417  killing process with pid 77904
00:17:16.417  Received shutdown signal, test time was about 10.000000 seconds
00:17:16.417  
00:17:16.417                                                                                                  Latency(us)
00:17:16.417  
[2024-12-16T06:27:33.393Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:16.417  
[2024-12-16T06:27:33.393Z]  ===================================================================================================================
00:17:16.417  
[2024-12-16T06:27:33.394Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:17:16.418   06:27:32	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:16.418   06:27:32	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:16.418   06:27:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 77904'
00:17:16.418   06:27:32	-- common/autotest_common.sh@955 -- # kill 77904
00:17:16.418   06:27:32	-- common/autotest_common.sh@960 -- # wait 77904
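The bdevperf sequence above is the positive run_bdevperf case: bdevperf is launched idle (-z) on its own RPC socket, a TLS-backed NVMe controller is attached over that socket with the matching PSK, and perform_tests drives the verify workload. Condensed from the traced commands (paths relative to the SPDK repo root):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key1.txt
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests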
00:17:16.418   06:27:32	-- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt
00:17:16.418   06:27:32	-- common/autotest_common.sh@650 -- # local es=0
00:17:16.418   06:27:32	-- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt
00:17:16.418   06:27:32	-- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:17:16.418   06:27:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:16.418    06:27:32	-- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:17:16.418   06:27:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:16.418   06:27:32	-- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt
00:17:16.418   06:27:32	-- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:16.418   06:27:32	-- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:16.418   06:27:32	-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:16.418   06:27:32	-- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt'
00:17:16.418   06:27:32	-- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:16.418   06:27:32	-- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:16.418   06:27:32	-- target/tls.sh@28 -- # bdevperf_pid=78057
00:17:16.418   06:27:32	-- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:16.418   06:27:32	-- target/tls.sh@31 -- # waitforlisten 78057 /var/tmp/bdevperf.sock
00:17:16.418   06:27:32	-- common/autotest_common.sh@829 -- # '[' -z 78057 ']'
00:17:16.418   06:27:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:16.418   06:27:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:16.418   06:27:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:16.418  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:16.418   06:27:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:16.418   06:27:32	-- common/autotest_common.sh@10 -- # set +x
00:17:16.418  [2024-12-16 06:27:33.028779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:16.418  [2024-12-16 06:27:33.029063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78057 ]
00:17:16.418  [2024-12-16 06:27:33.157279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:16.418  [2024-12-16 06:27:33.232326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:17.354   06:27:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:17.354   06:27:34	-- common/autotest_common.sh@862 -- # return 0
00:17:17.354   06:27:34	-- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt
00:17:17.354  [2024-12-16 06:27:34.250105] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:17.354  [2024-12-16 06:27:34.255102] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:17:17.354  [2024-12-16 06:27:34.255688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cf3d0 (107): Transport endpoint is not connected
00:17:17.354  [2024-12-16 06:27:34.256668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cf3d0 (9): Bad file descriptor
00:17:17.354  [2024-12-16 06:27:34.257663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:17.354  [2024-12-16 06:27:34.257828] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:17:17.354  [2024-12-16 06:27:34.257845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:17.354  2024/12/16 06:27:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters
00:17:17.354  request:
00:17:17.354  {
00:17:17.354    "method": "bdev_nvme_attach_controller",
00:17:17.354    "params": {
00:17:17.354      "name": "TLSTEST",
00:17:17.354      "trtype": "tcp",
00:17:17.354      "traddr": "10.0.0.2",
00:17:17.354      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:17.354      "adrfam": "ipv4",
00:17:17.354      "trsvcid": "4420",
00:17:17.354      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:17.354      "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt"
00:17:17.354    }
00:17:17.354  }
00:17:17.354  Got JSON-RPC error response
00:17:17.354  GoRPCClient: error on JSON-RPC call
00:17:17.354   06:27:34	-- target/tls.sh@36 -- # killprocess 78057
00:17:17.354   06:27:34	-- common/autotest_common.sh@936 -- # '[' -z 78057 ']'
00:17:17.354   06:27:34	-- common/autotest_common.sh@940 -- # kill -0 78057
00:17:17.354    06:27:34	-- common/autotest_common.sh@941 -- # uname
00:17:17.354   06:27:34	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:17.354    06:27:34	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78057
00:17:17.354   06:27:34	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:17.354   06:27:34	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:17.354  killing process with pid 78057
00:17:17.354   06:27:34	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78057'
00:17:17.354   06:27:34	-- common/autotest_common.sh@955 -- # kill 78057
00:17:17.355  Received shutdown signal, test time was about 10.000000 seconds
00:17:17.355  
00:17:17.355                                                                                                  Latency(us)
00:17:17.355  
[2024-12-16T06:27:34.331Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:17.355  
[2024-12-16T06:27:34.331Z]  ===================================================================================================================
00:17:17.355  
[2024-12-16T06:27:34.331Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:17.355   06:27:34	-- common/autotest_common.sh@960 -- # wait 78057
00:17:17.614   06:27:34	-- target/tls.sh@37 -- # return 1
00:17:17.614   06:27:34	-- common/autotest_common.sh@653 -- # es=1
00:17:17.614   06:27:34	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:17.614   06:27:34	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:17.614   06:27:34	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
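This is the first negative case: the target only knows key1.txt for host1, so attaching with key2.txt cannot complete the TLS handshake, and the NOT wrapper expects the attach RPC to fail, which it does with the -32602 error shown above. A hand-run equivalent of the expected-failure check:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key2.txt \
        && echo 'unexpected success' || echo 'attach failed as expected'

The next three cases repeat the same pattern with a mismatched hostnqn (host2), a mismatched subnqn (cnode2), and no PSK at all; each is likewise expected to fail.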
00:17:17.614   06:27:34	-- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:17.614   06:27:34	-- common/autotest_common.sh@650 -- # local es=0
00:17:17.614   06:27:34	-- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:17.614   06:27:34	-- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:17:17.614   06:27:34	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:17.614    06:27:34	-- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:17:17.614   06:27:34	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:17.614   06:27:34	-- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:17.614   06:27:34	-- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:17.614   06:27:34	-- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:17.614   06:27:34	-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:17:17.614   06:27:34	-- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt'
00:17:17.614   06:27:34	-- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:17.614   06:27:34	-- target/tls.sh@28 -- # bdevperf_pid=78097
00:17:17.614   06:27:34	-- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:17.614   06:27:34	-- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:17.614   06:27:34	-- target/tls.sh@31 -- # waitforlisten 78097 /var/tmp/bdevperf.sock
00:17:17.614   06:27:34	-- common/autotest_common.sh@829 -- # '[' -z 78097 ']'
00:17:17.614   06:27:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:17.614   06:27:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:17.614  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:17.614   06:27:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:17.614   06:27:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:17.614   06:27:34	-- common/autotest_common.sh@10 -- # set +x
00:17:17.614  [2024-12-16 06:27:34.587117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:17.614  [2024-12-16 06:27:34.587237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78097 ]
00:17:17.873  [2024-12-16 06:27:34.719209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:17.873  [2024-12-16 06:27:34.808316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:18.809   06:27:35	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:18.809   06:27:35	-- common/autotest_common.sh@862 -- # return 0
00:17:18.809   06:27:35	-- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:18.809  [2024-12-16 06:27:35.697124] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:18.809  [2024-12-16 06:27:35.702629] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:17:18.809  [2024-12-16 06:27:35.702668] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:17:18.809  [2024-12-16 06:27:35.702719] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:17:18.809  [2024-12-16 06:27:35.703550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17413d0 (107): Transport endpoint is not connected
00:17:18.809  [2024-12-16 06:27:35.704540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17413d0 (9): Bad file descriptor
00:17:18.809  [2024-12-16 06:27:35.705537] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:18.809  [2024-12-16 06:27:35.705584] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:17:18.809  [2024-12-16 06:27:35.705596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:18.809  2024/12/16 06:27:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters
00:17:18.809  request:
00:17:18.809  {
00:17:18.809    "method": "bdev_nvme_attach_controller",
00:17:18.809    "params": {
00:17:18.809      "name": "TLSTEST",
00:17:18.809      "trtype": "tcp",
00:17:18.809      "traddr": "10.0.0.2",
00:17:18.809      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:17:18.809      "adrfam": "ipv4",
00:17:18.809      "trsvcid": "4420",
00:17:18.809      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:18.809      "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt"
00:17:18.809    }
00:17:18.809  }
00:17:18.809  Got JSON-RPC error response
00:17:18.809  GoRPCClient: error on JSON-RPC call
00:17:18.809   06:27:35	-- target/tls.sh@36 -- # killprocess 78097
00:17:18.809   06:27:35	-- common/autotest_common.sh@936 -- # '[' -z 78097 ']'
00:17:18.809   06:27:35	-- common/autotest_common.sh@940 -- # kill -0 78097
00:17:18.809    06:27:35	-- common/autotest_common.sh@941 -- # uname
00:17:18.809   06:27:35	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:18.809    06:27:35	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78097
00:17:18.809  killing process with pid 78097
00:17:18.809  Received shutdown signal, test time was about 10.000000 seconds
00:17:18.809  
00:17:18.809                                                                                                  Latency(us)
00:17:18.809  
[2024-12-16T06:27:35.785Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:18.809  
[2024-12-16T06:27:35.785Z]  ===================================================================================================================
00:17:18.809  
[2024-12-16T06:27:35.785Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:18.809   06:27:35	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:18.809   06:27:35	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:18.809   06:27:35	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78097'
00:17:18.809   06:27:35	-- common/autotest_common.sh@955 -- # kill 78097
00:17:18.809   06:27:35	-- common/autotest_common.sh@960 -- # wait 78097
00:17:19.068   06:27:35	-- target/tls.sh@37 -- # return 1
00:17:19.068   06:27:35	-- common/autotest_common.sh@653 -- # es=1
00:17:19.068   06:27:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:19.068   06:27:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:19.068   06:27:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:19.068   06:27:35	-- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:19.068   06:27:35	-- common/autotest_common.sh@650 -- # local es=0
00:17:19.068   06:27:35	-- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:19.068   06:27:35	-- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:17:19.068   06:27:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:19.068    06:27:35	-- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:17:19.068   06:27:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:19.068   06:27:35	-- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:19.068   06:27:35	-- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:19.068   06:27:35	-- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:17:19.068   06:27:35	-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:19.068   06:27:35	-- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt'
00:17:19.068   06:27:35	-- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:19.068   06:27:35	-- target/tls.sh@28 -- # bdevperf_pid=78143
00:17:19.068   06:27:35	-- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:19.068   06:27:35	-- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:19.069   06:27:35	-- target/tls.sh@31 -- # waitforlisten 78143 /var/tmp/bdevperf.sock
00:17:19.069   06:27:35	-- common/autotest_common.sh@829 -- # '[' -z 78143 ']'
00:17:19.069   06:27:35	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:19.069   06:27:35	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:19.069  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:19.069   06:27:35	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:19.069   06:27:35	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:19.069   06:27:35	-- common/autotest_common.sh@10 -- # set +x
00:17:19.328  [2024-12-16 06:27:36.050119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:19.328  [2024-12-16 06:27:36.050239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78143 ]
00:17:19.328  [2024-12-16 06:27:36.180225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:19.328  [2024-12-16 06:27:36.270332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:20.265   06:27:36	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:20.265   06:27:36	-- common/autotest_common.sh@862 -- # return 0
00:17:20.265   06:27:36	-- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
00:17:20.265  [2024-12-16 06:27:37.155103] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:20.265  [2024-12-16 06:27:37.162715] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:17:20.265  [2024-12-16 06:27:37.162747] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:17:20.265  [2024-12-16 06:27:37.162794] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:17:20.265  [2024-12-16 06:27:37.163495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e733d0 (107): Transport endpoint is not connected
00:17:20.265  [2024-12-16 06:27:37.164484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e733d0 (9): Bad file descriptor
00:17:20.265  [2024-12-16 06:27:37.165480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:17:20.265  [2024-12-16 06:27:37.165521] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:17:20.265  [2024-12-16 06:27:37.165530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:17:20.265  2024/12/16 06:27:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters
00:17:20.265  request:
00:17:20.265  {
00:17:20.265    "method": "bdev_nvme_attach_controller",
00:17:20.265    "params": {
00:17:20.265      "name": "TLSTEST",
00:17:20.265      "trtype": "tcp",
00:17:20.265      "traddr": "10.0.0.2",
00:17:20.265      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:20.265      "adrfam": "ipv4",
00:17:20.265      "trsvcid": "4420",
00:17:20.265      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:17:20.265      "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt"
00:17:20.265    }
00:17:20.265  }
00:17:20.265  Got JSON-RPC error response
00:17:20.265  GoRPCClient: error on JSON-RPC call
00:17:20.265   06:27:37	-- target/tls.sh@36 -- # killprocess 78143
00:17:20.265   06:27:37	-- common/autotest_common.sh@936 -- # '[' -z 78143 ']'
00:17:20.265   06:27:37	-- common/autotest_common.sh@940 -- # kill -0 78143
00:17:20.265    06:27:37	-- common/autotest_common.sh@941 -- # uname
00:17:20.265   06:27:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:20.265    06:27:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78143
00:17:20.265  killing process with pid 78143
00:17:20.265  Received shutdown signal, test time was about 10.000000 seconds
00:17:20.265  
00:17:20.265                                                                                                  Latency(us)
00:17:20.265  
[2024-12-16T06:27:37.241Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:20.265  
[2024-12-16T06:27:37.241Z]  ===================================================================================================================
00:17:20.265  
[2024-12-16T06:27:37.241Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:20.265   06:27:37	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:20.265   06:27:37	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:20.265   06:27:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78143'
00:17:20.265   06:27:37	-- common/autotest_common.sh@955 -- # kill 78143
00:17:20.265   06:27:37	-- common/autotest_common.sh@960 -- # wait 78143
00:17:20.523   06:27:37	-- target/tls.sh@37 -- # return 1
00:17:20.523   06:27:37	-- common/autotest_common.sh@653 -- # es=1
00:17:20.523   06:27:37	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:20.523   06:27:37	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:20.523   06:27:37	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:20.523   06:27:37	-- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:17:20.523   06:27:37	-- common/autotest_common.sh@650 -- # local es=0
00:17:20.523   06:27:37	-- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:17:20.523   06:27:37	-- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:17:20.523   06:27:37	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:20.523    06:27:37	-- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:17:20.523   06:27:37	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:20.523   06:27:37	-- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:17:20.523   06:27:37	-- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:20.523   06:27:37	-- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:20.523   06:27:37	-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:20.523   06:27:37	-- target/tls.sh@23 -- # psk=
00:17:20.524   06:27:37	-- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:20.524   06:27:37	-- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:20.524   06:27:37	-- target/tls.sh@28 -- # bdevperf_pid=78187
00:17:20.524   06:27:37	-- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:20.524   06:27:37	-- target/tls.sh@31 -- # waitforlisten 78187 /var/tmp/bdevperf.sock
00:17:20.524   06:27:37	-- common/autotest_common.sh@829 -- # '[' -z 78187 ']'
00:17:20.524   06:27:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:20.524   06:27:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:20.524  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:20.524   06:27:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:20.524   06:27:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:20.524   06:27:37	-- common/autotest_common.sh@10 -- # set +x
00:17:20.524  [2024-12-16 06:27:37.481070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:20.524  [2024-12-16 06:27:37.481174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78187 ]
00:17:20.782  [2024-12-16 06:27:37.610987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:20.782  [2024-12-16 06:27:37.680186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:21.719   06:27:38	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:21.719   06:27:38	-- common/autotest_common.sh@862 -- # return 0
00:17:21.719   06:27:38	-- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:17:21.719  [2024-12-16 06:27:38.672915] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:17:21.719  [2024-12-16 06:27:38.674644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x781dc0 (9): Bad file descriptor
00:17:21.719  [2024-12-16 06:27:38.675638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:21.719  [2024-12-16 06:27:38.675675] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:17:21.719  [2024-12-16 06:27:38.675685] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:21.719  2024/12/16 06:27:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters
00:17:21.719  request:
00:17:21.719  {
00:17:21.719    "method": "bdev_nvme_attach_controller",
00:17:21.719    "params": {
00:17:21.719      "name": "TLSTEST",
00:17:21.719      "trtype": "tcp",
00:17:21.719      "traddr": "10.0.0.2",
00:17:21.719      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:21.719      "adrfam": "ipv4",
00:17:21.719      "trsvcid": "4420",
00:17:21.719      "subnqn": "nqn.2016-06.io.spdk:cnode1"
00:17:21.719    }
00:17:21.719  }
00:17:21.719  Got JSON-RPC error response
00:17:21.719  GoRPCClient: error on JSON-RPC call
00:17:21.978   06:27:38	-- target/tls.sh@36 -- # killprocess 78187
00:17:21.978   06:27:38	-- common/autotest_common.sh@936 -- # '[' -z 78187 ']'
00:17:21.978   06:27:38	-- common/autotest_common.sh@940 -- # kill -0 78187
00:17:21.978    06:27:38	-- common/autotest_common.sh@941 -- # uname
00:17:21.978   06:27:38	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:21.978    06:27:38	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78187
00:17:21.978  killing process with pid 78187
00:17:21.978  Received shutdown signal, test time was about 10.000000 seconds
00:17:21.978  
00:17:21.978                                                                                                  Latency(us)
00:17:21.978  
[2024-12-16T06:27:38.954Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:21.978  
[2024-12-16T06:27:38.954Z]  ===================================================================================================================
00:17:21.978  
[2024-12-16T06:27:38.954Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:21.978   06:27:38	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:21.978   06:27:38	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:21.978   06:27:38	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78187'
00:17:21.978   06:27:38	-- common/autotest_common.sh@955 -- # kill 78187
00:17:21.978   06:27:38	-- common/autotest_common.sh@960 -- # wait 78187
00:17:21.978   06:27:38	-- target/tls.sh@37 -- # return 1
00:17:21.979   06:27:38	-- common/autotest_common.sh@653 -- # es=1
00:17:21.979   06:27:38	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:21.979   06:27:38	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:21.979   06:27:38	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:21.979   06:27:38	-- target/tls.sh@167 -- # killprocess 77541
00:17:21.979   06:27:38	-- common/autotest_common.sh@936 -- # '[' -z 77541 ']'
00:17:21.979   06:27:38	-- common/autotest_common.sh@940 -- # kill -0 77541
00:17:21.979    06:27:38	-- common/autotest_common.sh@941 -- # uname
00:17:21.979   06:27:38	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:21.979    06:27:38	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77541
00:17:22.238  killing process with pid 77541
00:17:22.238   06:27:38	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:22.238   06:27:38	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:22.238   06:27:38	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 77541'
00:17:22.238   06:27:38	-- common/autotest_common.sh@955 -- # kill 77541
00:17:22.238   06:27:38	-- common/autotest_common.sh@960 -- # wait 77541
00:17:22.499    06:27:39	-- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02
00:17:22.499    06:27:39	-- target/tls.sh@49 -- # local key hash crc
00:17:22.499    06:27:39	-- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:17:22.499    06:27:39	-- target/tls.sh@51 -- # hash=02
00:17:22.499     06:27:39	-- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677
00:17:22.499     06:27:39	-- target/tls.sh@52 -- # tail -c8
00:17:22.499     06:27:39	-- target/tls.sh@52 -- # gzip -1 -c
00:17:22.499     06:27:39	-- target/tls.sh@52 -- # head -c 4
00:17:22.499    06:27:39	-- target/tls.sh@52 -- # crc='�e�'\'''
00:17:22.499     06:27:39	-- target/tls.sh@54 -- # base64 /dev/fd/62
00:17:22.499      06:27:39	-- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\'''
00:17:22.499    06:27:39	-- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:17:22.499   06:27:39	-- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:17:22.499   06:27:39	-- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:22.499   06:27:39	-- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:17:22.499   06:27:39	-- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
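The block above builds the long-form interchange key used for the remainder of the test: the CRC32 of the configured key is read out of a gzip -1 trailer (the last 8 bytes of a gzip stream are the CRC32 followed by the input size), appended to the key, base64-encoded, and wrapped as NVMeTLSkey-1:02:<base64>:. A simplified, hand-runnable sketch of the same steps (the script itself streams the raw CRC bytes through /dev/fd rather than a shell variable, which is why they appear as unprintable characters in the trace):

    key=00112233445566778899aabbccddeeff0011223344556677
    # gzip trailer: last 8 bytes = CRC32 (4 bytes) + input size (4 bytes); keep the CRC32
    echo -n "$key" | gzip -1 -c | tail -c8 | head -c4 > crc.bin
    # interchange form: prefix, hash id 02, base64(key bytes || CRC32), trailing colon
    printf 'NVMeTLSkey-1:02:%s:\n' "$({ echo -n "$key"; cat crc.bin; } | base64)"

This reproduces the NVMeTLSkey-1:02:MDAx...wWXNJw==: value written to key_long.txt with mode 0600 above.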
00:17:22.499   06:27:39	-- target/tls.sh@172 -- # nvmfappstart -m 0x2
00:17:22.499   06:27:39	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:17:22.499   06:27:39	-- common/autotest_common.sh@722 -- # xtrace_disable
00:17:22.499   06:27:39	-- common/autotest_common.sh@10 -- # set +x
00:17:22.499   06:27:39	-- nvmf/common.sh@469 -- # nvmfpid=78249
00:17:22.499   06:27:39	-- nvmf/common.sh@470 -- # waitforlisten 78249
00:17:22.499   06:27:39	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:22.499   06:27:39	-- common/autotest_common.sh@829 -- # '[' -z 78249 ']'
00:17:22.499   06:27:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:22.499   06:27:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:22.499   06:27:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:22.500  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:22.500   06:27:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:22.500   06:27:39	-- common/autotest_common.sh@10 -- # set +x
00:17:22.500  [2024-12-16 06:27:39.364599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:22.500  [2024-12-16 06:27:39.364692] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:22.758  [2024-12-16 06:27:39.494173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:22.758  [2024-12-16 06:27:39.593328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:22.758  [2024-12-16 06:27:39.593460] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:22.758  [2024-12-16 06:27:39.593472] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:22.758  [2024-12-16 06:27:39.593481] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:22.758  [2024-12-16 06:27:39.593525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:23.693   06:27:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:23.693   06:27:40	-- common/autotest_common.sh@862 -- # return 0
00:17:23.693   06:27:40	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:17:23.693   06:27:40	-- common/autotest_common.sh@728 -- # xtrace_disable
00:17:23.693   06:27:40	-- common/autotest_common.sh@10 -- # set +x
00:17:23.693   06:27:40	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:23.693   06:27:40	-- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:23.693   06:27:40	-- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:23.693   06:27:40	-- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:17:23.693  [2024-12-16 06:27:40.640199] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:23.693   06:27:40	-- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:17:23.951   06:27:40	-- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:17:24.210  [2024-12-16 06:27:41.168277] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:17:24.210  [2024-12-16 06:27:41.168506] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:24.468   06:27:41	-- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:17:24.727  malloc0
00:17:24.727   06:27:41	-- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:24.727   06:27:41	-- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:24.986   06:27:41	-- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:24.986   06:27:41	-- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:24.986   06:27:41	-- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:24.986   06:27:41	-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:24.986   06:27:41	-- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt'
00:17:24.986   06:27:41	-- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:24.986   06:27:41	-- target/tls.sh@28 -- # bdevperf_pid=78346
00:17:24.986   06:27:41	-- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:24.986   06:27:41	-- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:24.986   06:27:41	-- target/tls.sh@31 -- # waitforlisten 78346 /var/tmp/bdevperf.sock
00:17:24.986   06:27:41	-- common/autotest_common.sh@829 -- # '[' -z 78346 ']'
00:17:24.986   06:27:41	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:24.986   06:27:41	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:24.986   06:27:41	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:24.986  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:24.986   06:27:41	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:24.986   06:27:41	-- common/autotest_common.sh@10 -- # set +x
00:17:25.245  [2024-12-16 06:27:41.997521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:25.245  [2024-12-16 06:27:41.997615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78346 ]
00:17:25.245  [2024-12-16 06:27:42.131437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:25.503  [2024-12-16 06:27:42.242293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:26.071   06:27:42	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:26.071   06:27:42	-- common/autotest_common.sh@862 -- # return 0
00:17:26.071   06:27:42	-- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:26.330  [2024-12-16 06:27:43.133690] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:26.330  TLSTESTn1
00:17:26.330   06:27:43	-- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:17:26.588  Running I/O for 10 seconds...
00:17:36.630  
00:17:36.630                                                                                                  Latency(us)
00:17:36.630  
[2024-12-16T06:27:53.606Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:36.630  
[2024-12-16T06:27:53.606Z]  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:36.630  	 Verification LBA range: start 0x0 length 0x2000
00:17:36.630  	 TLSTESTn1           :      10.01    6559.47      25.62       0.00     0.00   19485.74    4647.10   21209.83
00:17:36.630  
[2024-12-16T06:27:53.606Z]  ===================================================================================================================
00:17:36.630  
[2024-12-16T06:27:53.606Z]  Total                       :               6559.47      25.62       0.00     0.00   19485.74    4647.10   21209.83
00:17:36.630  0
00:17:36.630   06:27:53	-- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:36.630   06:27:53	-- target/tls.sh@45 -- # killprocess 78346
00:17:36.630   06:27:53	-- common/autotest_common.sh@936 -- # '[' -z 78346 ']'
00:17:36.630   06:27:53	-- common/autotest_common.sh@940 -- # kill -0 78346
00:17:36.630    06:27:53	-- common/autotest_common.sh@941 -- # uname
00:17:36.630   06:27:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:36.630    06:27:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78346
00:17:36.630   06:27:53	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:36.630   06:27:53	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:36.630  killing process with pid 78346
00:17:36.630   06:27:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78346'
00:17:36.630  Received shutdown signal, test time was about 10.000000 seconds
00:17:36.630  
00:17:36.630                                                                                                  Latency(us)
00:17:36.630  
[2024-12-16T06:27:53.606Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:36.630  
[2024-12-16T06:27:53.606Z]  ===================================================================================================================
00:17:36.630  
[2024-12-16T06:27:53.606Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:17:36.630   06:27:53	-- common/autotest_common.sh@955 -- # kill 78346
00:17:36.630   06:27:53	-- common/autotest_common.sh@960 -- # wait 78346
00:17:36.889   06:27:53	-- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:36.889   06:27:53	-- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:36.889   06:27:53	-- common/autotest_common.sh@650 -- # local es=0
00:17:36.889   06:27:53	-- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:36.889   06:27:53	-- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:17:36.889   06:27:53	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:36.889    06:27:53	-- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:17:36.889   06:27:53	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:36.889   06:27:53	-- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:36.889   06:27:53	-- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:36.889   06:27:53	-- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:36.889   06:27:53	-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:36.889   06:27:53	-- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt'
00:17:36.889   06:27:53	-- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:36.889   06:27:53	-- target/tls.sh@28 -- # bdevperf_pid=78499
00:17:36.889   06:27:53	-- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:36.889   06:27:53	-- target/tls.sh@31 -- # waitforlisten 78499 /var/tmp/bdevperf.sock
00:17:36.889   06:27:53	-- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:36.889   06:27:53	-- common/autotest_common.sh@829 -- # '[' -z 78499 ']'
00:17:36.889   06:27:53	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:36.889   06:27:53	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:36.889  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:36.889   06:27:53	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:36.889   06:27:53	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:36.889   06:27:53	-- common/autotest_common.sh@10 -- # set +x
00:17:36.889  [2024-12-16 06:27:53.707666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:36.889  [2024-12-16 06:27:53.707766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78499 ]
00:17:36.889  [2024-12-16 06:27:53.840764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:37.148  [2024-12-16 06:27:53.927301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:37.715   06:27:54	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:37.715   06:27:54	-- common/autotest_common.sh@862 -- # return 0
00:17:37.716   06:27:54	-- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:37.975  [2024-12-16 06:27:54.839360] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:37.975  [2024-12-16 06:27:54.839834] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file
00:17:37.975  2024/12/16 06:27:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:37.975  request:
00:17:37.975  {
00:17:37.975    "method": "bdev_nvme_attach_controller",
00:17:37.975    "params": {
00:17:37.975      "name": "TLSTEST",
00:17:37.975      "trtype": "tcp",
00:17:37.975      "traddr": "10.0.0.2",
00:17:37.975      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:37.975      "adrfam": "ipv4",
00:17:37.975      "trsvcid": "4420",
00:17:37.975      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:37.975      "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt"
00:17:37.975    }
00:17:37.975  }
00:17:37.975  Got JSON-RPC error response
00:17:37.975  GoRPCClient: error on JSON-RPC call
00:17:37.975   06:27:54	-- target/tls.sh@36 -- # killprocess 78499
00:17:37.975   06:27:54	-- common/autotest_common.sh@936 -- # '[' -z 78499 ']'
00:17:37.975   06:27:54	-- common/autotest_common.sh@940 -- # kill -0 78499
00:17:37.975    06:27:54	-- common/autotest_common.sh@941 -- # uname
00:17:37.975   06:27:54	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:37.975    06:27:54	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78499
00:17:37.975   06:27:54	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:37.975   06:27:54	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:37.975  killing process with pid 78499
00:17:37.975   06:27:54	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78499'
00:17:37.975  Received shutdown signal, test time was about 10.000000 seconds
00:17:37.975  
00:17:37.975                                                                                                  Latency(us)
00:17:37.975  
[2024-12-16T06:27:54.951Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:37.975  
[2024-12-16T06:27:54.951Z]  ===================================================================================================================
00:17:37.975  
[2024-12-16T06:27:54.951Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:37.975   06:27:54	-- common/autotest_common.sh@955 -- # kill 78499
00:17:37.975   06:27:54	-- common/autotest_common.sh@960 -- # wait 78499
00:17:38.234   06:27:55	-- target/tls.sh@37 -- # return 1
00:17:38.234   06:27:55	-- common/autotest_common.sh@653 -- # es=1
00:17:38.234   06:27:55	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:38.234   06:27:55	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:38.234   06:27:55	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
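This last negative case exercises file permissions rather than key contents: key_long.txt, which just passed a full verify run, is re-marked 0666, and the attach is rejected on the host side with "Incorrect permissions for PSK file" (JSON-RPC error -22). The expected-failure check is equivalent to:

    chmod 0666 test/nvmf/target/key_long.txt
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt \
        && echo 'unexpected success' || echo 'rejected as expected: the PSK file must keep restrictive (0600) permissions'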
00:17:38.234   06:27:55	-- target/tls.sh@183 -- # killprocess 78249
00:17:38.234   06:27:55	-- common/autotest_common.sh@936 -- # '[' -z 78249 ']'
00:17:38.234   06:27:55	-- common/autotest_common.sh@940 -- # kill -0 78249
00:17:38.234    06:27:55	-- common/autotest_common.sh@941 -- # uname
00:17:38.234   06:27:55	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:38.234    06:27:55	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78249
00:17:38.234   06:27:55	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:38.234   06:27:55	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:38.234  killing process with pid 78249
00:17:38.234   06:27:55	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78249'
00:17:38.234   06:27:55	-- common/autotest_common.sh@955 -- # kill 78249
00:17:38.234   06:27:55	-- common/autotest_common.sh@960 -- # wait 78249
00:17:38.492   06:27:55	-- target/tls.sh@184 -- # nvmfappstart -m 0x2
00:17:38.492   06:27:55	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:17:38.492   06:27:55	-- common/autotest_common.sh@722 -- # xtrace_disable
00:17:38.492   06:27:55	-- common/autotest_common.sh@10 -- # set +x
00:17:38.492   06:27:55	-- nvmf/common.sh@469 -- # nvmfpid=78555
00:17:38.492   06:27:55	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:38.492   06:27:55	-- nvmf/common.sh@470 -- # waitforlisten 78555
00:17:38.492   06:27:55	-- common/autotest_common.sh@829 -- # '[' -z 78555 ']'
00:17:38.750   06:27:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:38.750   06:27:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:38.750  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:38.750   06:27:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:38.750   06:27:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:38.750   06:27:55	-- common/autotest_common.sh@10 -- # set +x
00:17:38.750  [2024-12-16 06:27:55.509820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:38.750  [2024-12-16 06:27:55.509896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:38.750  [2024-12-16 06:27:55.633043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:38.750  [2024-12-16 06:27:55.712107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:38.750  [2024-12-16 06:27:55.712264] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:38.750  [2024-12-16 06:27:55.712277] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:38.750  [2024-12-16 06:27:55.712285] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:38.750  [2024-12-16 06:27:55.712317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:39.686   06:27:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:39.686   06:27:56	-- common/autotest_common.sh@862 -- # return 0
00:17:39.686   06:27:56	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:17:39.686   06:27:56	-- common/autotest_common.sh@728 -- # xtrace_disable
00:17:39.686   06:27:56	-- common/autotest_common.sh@10 -- # set +x
00:17:39.686   06:27:56	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:39.686   06:27:56	-- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:39.686   06:27:56	-- common/autotest_common.sh@650 -- # local es=0
00:17:39.686   06:27:56	-- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:39.686   06:27:56	-- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt
00:17:39.686   06:27:56	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:39.686    06:27:56	-- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt
00:17:39.686   06:27:56	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:39.686   06:27:56	-- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:39.686   06:27:56	-- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:39.686   06:27:56	-- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:17:39.944  [2024-12-16 06:27:56.795478] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:39.944   06:27:56	-- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:17:40.203   06:27:57	-- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:17:40.461  [2024-12-16 06:27:57.243553] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:17:40.461  [2024-12-16 06:27:57.243781] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:40.462   06:27:57	-- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:17:40.720  malloc0
00:17:40.720   06:27:57	-- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:40.979   06:27:57	-- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:40.979  [2024-12-16 06:27:57.936819] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file
00:17:40.979  [2024-12-16 06:27:57.936848] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file
00:17:40.979  [2024-12-16 06:27:57.936864] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport
00:17:40.979  2024/12/16 06:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error
00:17:40.979  request:
00:17:40.979  {
00:17:40.979    "method": "nvmf_subsystem_add_host",
00:17:40.979    "params": {
00:17:40.979      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:40.979      "host": "nqn.2016-06.io.spdk:host1",
00:17:40.979      "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt"
00:17:40.979    }
00:17:40.979  }
00:17:40.979  Got JSON-RPC error response
00:17:40.979  GoRPCClient: error on JSON-RPC call
00:17:41.237   06:27:57	-- common/autotest_common.sh@653 -- # es=1
00:17:41.237   06:27:57	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:41.237   06:27:57	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:41.237   06:27:57	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:41.237   06:27:57	-- target/tls.sh@189 -- # killprocess 78555
00:17:41.237   06:27:57	-- common/autotest_common.sh@936 -- # '[' -z 78555 ']'
00:17:41.237   06:27:57	-- common/autotest_common.sh@940 -- # kill -0 78555
00:17:41.237    06:27:57	-- common/autotest_common.sh@941 -- # uname
00:17:41.238   06:27:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:41.238    06:27:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78555
00:17:41.238   06:27:57	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:41.238   06:27:57	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:41.238  killing process with pid 78555
00:17:41.238   06:27:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78555'
00:17:41.238   06:27:57	-- common/autotest_common.sh@955 -- # kill 78555
00:17:41.238   06:27:57	-- common/autotest_common.sh@960 -- # wait 78555
00:17:41.496   06:27:58	-- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:41.496   06:27:58	-- target/tls.sh@193 -- # nvmfappstart -m 0x2
00:17:41.496   06:27:58	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:17:41.496   06:27:58	-- common/autotest_common.sh@722 -- # xtrace_disable
00:17:41.496   06:27:58	-- common/autotest_common.sh@10 -- # set +x
00:17:41.496   06:27:58	-- nvmf/common.sh@469 -- # nvmfpid=78660
00:17:41.496   06:27:58	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:41.496   06:27:58	-- nvmf/common.sh@470 -- # waitforlisten 78660
00:17:41.496   06:27:58	-- common/autotest_common.sh@829 -- # '[' -z 78660 ']'
00:17:41.496   06:27:58	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:41.496   06:27:58	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:41.496  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:41.496   06:27:58	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:41.496   06:27:58	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:41.496   06:27:58	-- common/autotest_common.sh@10 -- # set +x
00:17:41.496  [2024-12-16 06:27:58.375545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:41.496  [2024-12-16 06:27:58.375642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:41.755  [2024-12-16 06:27:58.509718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:41.755  [2024-12-16 06:27:58.584824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:41.755  [2024-12-16 06:27:58.584969] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:41.755  [2024-12-16 06:27:58.584981] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:41.755  [2024-12-16 06:27:58.584989] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:41.755  [2024-12-16 06:27:58.585020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:42.691   06:27:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:42.691   06:27:59	-- common/autotest_common.sh@862 -- # return 0
00:17:42.691   06:27:59	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:17:42.691   06:27:59	-- common/autotest_common.sh@728 -- # xtrace_disable
00:17:42.691   06:27:59	-- common/autotest_common.sh@10 -- # set +x
00:17:42.691   06:27:59	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:42.691   06:27:59	-- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:42.691   06:27:59	-- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:42.691   06:27:59	-- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:17:42.691  [2024-12-16 06:27:59.615751] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:42.691   06:27:59	-- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:17:42.950   06:27:59	-- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:17:43.209  [2024-12-16 06:28:00.107939] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:17:43.209  [2024-12-16 06:28:00.108213] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:43.209   06:28:00	-- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:17:43.467  malloc0
00:17:43.468   06:28:00	-- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:43.726   06:28:00	-- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:43.986   06:28:00	-- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:43.986   06:28:00	-- target/tls.sh@197 -- # bdevperf_pid=78767
00:17:43.986   06:28:00	-- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:43.986   06:28:00	-- target/tls.sh@200 -- # waitforlisten 78767 /var/tmp/bdevperf.sock
00:17:43.986   06:28:00	-- common/autotest_common.sh@829 -- # '[' -z 78767 ']'
00:17:43.986   06:28:00	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:43.986   06:28:00	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:43.986   06:28:00	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:43.986  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:43.986   06:28:00	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:43.986   06:28:00	-- common/autotest_common.sh@10 -- # set +x
00:17:43.986  [2024-12-16 06:28:00.795536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:43.986  [2024-12-16 06:28:00.795604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78767 ]
00:17:43.986  [2024-12-16 06:28:00.930404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:44.245  [2024-12-16 06:28:01.031261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:44.811   06:28:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:44.811   06:28:01	-- common/autotest_common.sh@862 -- # return 0
00:17:44.811   06:28:01	-- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:45.070  [2024-12-16 06:28:01.938640] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:45.070  TLSTESTn1
00:17:45.070    06:28:02	-- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:17:45.638   06:28:02	-- target/tls.sh@205 -- # tgtconf='{
00:17:45.638    "subsystems": [
00:17:45.638      {
00:17:45.638        "subsystem": "iobuf",
00:17:45.638        "config": [
00:17:45.638          {
00:17:45.638            "method": "iobuf_set_options",
00:17:45.638            "params": {
00:17:45.638              "large_bufsize": 135168,
00:17:45.638              "large_pool_count": 1024,
00:17:45.638              "small_bufsize": 8192,
00:17:45.638              "small_pool_count": 8192
00:17:45.638            }
00:17:45.638          }
00:17:45.638        ]
00:17:45.638      },
00:17:45.638      {
00:17:45.638        "subsystem": "sock",
00:17:45.638        "config": [
00:17:45.638          {
00:17:45.638            "method": "sock_impl_set_options",
00:17:45.638            "params": {
00:17:45.638              "enable_ktls": false,
00:17:45.638              "enable_placement_id": 0,
00:17:45.638              "enable_quickack": false,
00:17:45.638              "enable_recv_pipe": true,
00:17:45.638              "enable_zerocopy_send_client": false,
00:17:45.638              "enable_zerocopy_send_server": true,
00:17:45.638              "impl_name": "posix",
00:17:45.638              "recv_buf_size": 2097152,
00:17:45.638              "send_buf_size": 2097152,
00:17:45.638              "tls_version": 0,
00:17:45.638              "zerocopy_threshold": 0
00:17:45.638            }
00:17:45.638          },
00:17:45.638          {
00:17:45.638            "method": "sock_impl_set_options",
00:17:45.638            "params": {
00:17:45.638              "enable_ktls": false,
00:17:45.638              "enable_placement_id": 0,
00:17:45.638              "enable_quickack": false,
00:17:45.638              "enable_recv_pipe": true,
00:17:45.638              "enable_zerocopy_send_client": false,
00:17:45.638              "enable_zerocopy_send_server": true,
00:17:45.638              "impl_name": "ssl",
00:17:45.638              "recv_buf_size": 4096,
00:17:45.638              "send_buf_size": 4096,
00:17:45.638              "tls_version": 0,
00:17:45.638              "zerocopy_threshold": 0
00:17:45.638            }
00:17:45.638          }
00:17:45.638        ]
00:17:45.638      },
00:17:45.638      {
00:17:45.638        "subsystem": "vmd",
00:17:45.638        "config": []
00:17:45.638      },
00:17:45.638      {
00:17:45.638        "subsystem": "accel",
00:17:45.638        "config": [
00:17:45.638          {
00:17:45.638            "method": "accel_set_options",
00:17:45.638            "params": {
00:17:45.638              "buf_count": 2048,
00:17:45.638              "large_cache_size": 16,
00:17:45.638              "sequence_count": 2048,
00:17:45.638              "small_cache_size": 128,
00:17:45.638              "task_count": 2048
00:17:45.638            }
00:17:45.638          }
00:17:45.638        ]
00:17:45.638      },
00:17:45.638      {
00:17:45.638        "subsystem": "bdev",
00:17:45.638        "config": [
00:17:45.638          {
00:17:45.638            "method": "bdev_set_options",
00:17:45.638            "params": {
00:17:45.638              "bdev_auto_examine": true,
00:17:45.638              "bdev_io_cache_size": 256,
00:17:45.638              "bdev_io_pool_size": 65535,
00:17:45.638              "iobuf_large_cache_size": 16,
00:17:45.638              "iobuf_small_cache_size": 128
00:17:45.638            }
00:17:45.638          },
00:17:45.638          {
00:17:45.638            "method": "bdev_raid_set_options",
00:17:45.638            "params": {
00:17:45.638              "process_window_size_kb": 1024
00:17:45.638            }
00:17:45.638          },
00:17:45.638          {
00:17:45.638            "method": "bdev_iscsi_set_options",
00:17:45.638            "params": {
00:17:45.638              "timeout_sec": 30
00:17:45.638            }
00:17:45.638          },
00:17:45.638          {
00:17:45.638            "method": "bdev_nvme_set_options",
00:17:45.638            "params": {
00:17:45.638              "action_on_timeout": "none",
00:17:45.638              "allow_accel_sequence": false,
00:17:45.638              "arbitration_burst": 0,
00:17:45.638              "bdev_retry_count": 3,
00:17:45.638              "ctrlr_loss_timeout_sec": 0,
00:17:45.638              "delay_cmd_submit": true,
00:17:45.638              "fast_io_fail_timeout_sec": 0,
00:17:45.638              "generate_uuids": false,
00:17:45.638              "high_priority_weight": 0,
00:17:45.638              "io_path_stat": false,
00:17:45.638              "io_queue_requests": 0,
00:17:45.638              "keep_alive_timeout_ms": 10000,
00:17:45.638              "low_priority_weight": 0,
00:17:45.638              "medium_priority_weight": 0,
00:17:45.638              "nvme_adminq_poll_period_us": 10000,
00:17:45.638              "nvme_ioq_poll_period_us": 0,
00:17:45.638              "reconnect_delay_sec": 0,
00:17:45.638              "timeout_admin_us": 0,
00:17:45.638              "timeout_us": 0,
00:17:45.638              "transport_ack_timeout": 0,
00:17:45.638              "transport_retry_count": 4,
00:17:45.638              "transport_tos": 0
00:17:45.638            }
00:17:45.638          },
00:17:45.638          {
00:17:45.638            "method": "bdev_nvme_set_hotplug",
00:17:45.638            "params": {
00:17:45.638              "enable": false,
00:17:45.638              "period_us": 100000
00:17:45.638            }
00:17:45.638          },
00:17:45.638          {
00:17:45.638            "method": "bdev_malloc_create",
00:17:45.638            "params": {
00:17:45.638              "block_size": 4096,
00:17:45.638              "name": "malloc0",
00:17:45.638              "num_blocks": 8192,
00:17:45.638              "optimal_io_boundary": 0,
00:17:45.638              "physical_block_size": 4096,
00:17:45.638              "uuid": "d69ebdc1-e597-46d9-860e-ea3db1ac9573"
00:17:45.638            }
00:17:45.638          },
00:17:45.638          {
00:17:45.638            "method": "bdev_wait_for_examine"
00:17:45.638          }
00:17:45.638        ]
00:17:45.638      },
00:17:45.638      {
00:17:45.638        "subsystem": "nbd",
00:17:45.638        "config": []
00:17:45.638      },
00:17:45.639      {
00:17:45.639        "subsystem": "scheduler",
00:17:45.639        "config": [
00:17:45.639          {
00:17:45.639            "method": "framework_set_scheduler",
00:17:45.639            "params": {
00:17:45.639              "name": "static"
00:17:45.639            }
00:17:45.639          }
00:17:45.639        ]
00:17:45.639      },
00:17:45.639      {
00:17:45.639        "subsystem": "nvmf",
00:17:45.639        "config": [
00:17:45.639          {
00:17:45.639            "method": "nvmf_set_config",
00:17:45.639            "params": {
00:17:45.639              "admin_cmd_passthru": {
00:17:45.639                "identify_ctrlr": false
00:17:45.639              },
00:17:45.639              "discovery_filter": "match_any"
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "nvmf_set_max_subsystems",
00:17:45.639            "params": {
00:17:45.639              "max_subsystems": 1024
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "nvmf_set_crdt",
00:17:45.639            "params": {
00:17:45.639              "crdt1": 0,
00:17:45.639              "crdt2": 0,
00:17:45.639              "crdt3": 0
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "nvmf_create_transport",
00:17:45.639            "params": {
00:17:45.639              "abort_timeout_sec": 1,
00:17:45.639              "buf_cache_size": 4294967295,
00:17:45.639              "c2h_success": false,
00:17:45.639              "dif_insert_or_strip": false,
00:17:45.639              "in_capsule_data_size": 4096,
00:17:45.639              "io_unit_size": 131072,
00:17:45.639              "max_aq_depth": 128,
00:17:45.639              "max_io_qpairs_per_ctrlr": 127,
00:17:45.639              "max_io_size": 131072,
00:17:45.639              "max_queue_depth": 128,
00:17:45.639              "num_shared_buffers": 511,
00:17:45.639              "sock_priority": 0,
00:17:45.639              "trtype": "TCP",
00:17:45.639              "zcopy": false
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "nvmf_create_subsystem",
00:17:45.639            "params": {
00:17:45.639              "allow_any_host": false,
00:17:45.639              "ana_reporting": false,
00:17:45.639              "max_cntlid": 65519,
00:17:45.639              "max_namespaces": 10,
00:17:45.639              "min_cntlid": 1,
00:17:45.639              "model_number": "SPDK bdev Controller",
00:17:45.639              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:45.639              "serial_number": "SPDK00000000000001"
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "nvmf_subsystem_add_host",
00:17:45.639            "params": {
00:17:45.639              "host": "nqn.2016-06.io.spdk:host1",
00:17:45.639              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:45.639              "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt"
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "nvmf_subsystem_add_ns",
00:17:45.639            "params": {
00:17:45.639              "namespace": {
00:17:45.639                "bdev_name": "malloc0",
00:17:45.639                "nguid": "D69EBDC1E59746D9860EEA3DB1AC9573",
00:17:45.639                "nsid": 1,
00:17:45.639                "uuid": "d69ebdc1-e597-46d9-860e-ea3db1ac9573"
00:17:45.639              },
00:17:45.639              "nqn": "nqn.2016-06.io.spdk:cnode1"
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "nvmf_subsystem_add_listener",
00:17:45.639            "params": {
00:17:45.639              "listen_address": {
00:17:45.639                "adrfam": "IPv4",
00:17:45.639                "traddr": "10.0.0.2",
00:17:45.639                "trsvcid": "4420",
00:17:45.639                "trtype": "TCP"
00:17:45.639              },
00:17:45.639              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:45.639              "secure_channel": true
00:17:45.639            }
00:17:45.639          }
00:17:45.639        ]
00:17:45.639      }
00:17:45.639    ]
00:17:45.639  }'
00:17:45.639    06:28:02	-- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:17:45.639   06:28:02	-- target/tls.sh@206 -- # bdevperfconf='{
00:17:45.639    "subsystems": [
00:17:45.639      {
00:17:45.639        "subsystem": "iobuf",
00:17:45.639        "config": [
00:17:45.639          {
00:17:45.639            "method": "iobuf_set_options",
00:17:45.639            "params": {
00:17:45.639              "large_bufsize": 135168,
00:17:45.639              "large_pool_count": 1024,
00:17:45.639              "small_bufsize": 8192,
00:17:45.639              "small_pool_count": 8192
00:17:45.639            }
00:17:45.639          }
00:17:45.639        ]
00:17:45.639      },
00:17:45.639      {
00:17:45.639        "subsystem": "sock",
00:17:45.639        "config": [
00:17:45.639          {
00:17:45.639            "method": "sock_impl_set_options",
00:17:45.639            "params": {
00:17:45.639              "enable_ktls": false,
00:17:45.639              "enable_placement_id": 0,
00:17:45.639              "enable_quickack": false,
00:17:45.639              "enable_recv_pipe": true,
00:17:45.639              "enable_zerocopy_send_client": false,
00:17:45.639              "enable_zerocopy_send_server": true,
00:17:45.639              "impl_name": "posix",
00:17:45.639              "recv_buf_size": 2097152,
00:17:45.639              "send_buf_size": 2097152,
00:17:45.639              "tls_version": 0,
00:17:45.639              "zerocopy_threshold": 0
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "sock_impl_set_options",
00:17:45.639            "params": {
00:17:45.639              "enable_ktls": false,
00:17:45.639              "enable_placement_id": 0,
00:17:45.639              "enable_quickack": false,
00:17:45.639              "enable_recv_pipe": true,
00:17:45.639              "enable_zerocopy_send_client": false,
00:17:45.639              "enable_zerocopy_send_server": true,
00:17:45.639              "impl_name": "ssl",
00:17:45.639              "recv_buf_size": 4096,
00:17:45.639              "send_buf_size": 4096,
00:17:45.639              "tls_version": 0,
00:17:45.639              "zerocopy_threshold": 0
00:17:45.639            }
00:17:45.639          }
00:17:45.639        ]
00:17:45.639      },
00:17:45.639      {
00:17:45.639        "subsystem": "vmd",
00:17:45.639        "config": []
00:17:45.639      },
00:17:45.639      {
00:17:45.639        "subsystem": "accel",
00:17:45.639        "config": [
00:17:45.639          {
00:17:45.639            "method": "accel_set_options",
00:17:45.639            "params": {
00:17:45.639              "buf_count": 2048,
00:17:45.639              "large_cache_size": 16,
00:17:45.639              "sequence_count": 2048,
00:17:45.639              "small_cache_size": 128,
00:17:45.639              "task_count": 2048
00:17:45.639            }
00:17:45.639          }
00:17:45.639        ]
00:17:45.639      },
00:17:45.639      {
00:17:45.639        "subsystem": "bdev",
00:17:45.639        "config": [
00:17:45.639          {
00:17:45.639            "method": "bdev_set_options",
00:17:45.639            "params": {
00:17:45.639              "bdev_auto_examine": true,
00:17:45.639              "bdev_io_cache_size": 256,
00:17:45.639              "bdev_io_pool_size": 65535,
00:17:45.639              "iobuf_large_cache_size": 16,
00:17:45.639              "iobuf_small_cache_size": 128
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "bdev_raid_set_options",
00:17:45.639            "params": {
00:17:45.639              "process_window_size_kb": 1024
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "bdev_iscsi_set_options",
00:17:45.639            "params": {
00:17:45.639              "timeout_sec": 30
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "bdev_nvme_set_options",
00:17:45.639            "params": {
00:17:45.639              "action_on_timeout": "none",
00:17:45.639              "allow_accel_sequence": false,
00:17:45.639              "arbitration_burst": 0,
00:17:45.639              "bdev_retry_count": 3,
00:17:45.639              "ctrlr_loss_timeout_sec": 0,
00:17:45.639              "delay_cmd_submit": true,
00:17:45.639              "fast_io_fail_timeout_sec": 0,
00:17:45.639              "generate_uuids": false,
00:17:45.639              "high_priority_weight": 0,
00:17:45.639              "io_path_stat": false,
00:17:45.639              "io_queue_requests": 512,
00:17:45.639              "keep_alive_timeout_ms": 10000,
00:17:45.639              "low_priority_weight": 0,
00:17:45.639              "medium_priority_weight": 0,
00:17:45.639              "nvme_adminq_poll_period_us": 10000,
00:17:45.639              "nvme_ioq_poll_period_us": 0,
00:17:45.639              "reconnect_delay_sec": 0,
00:17:45.639              "timeout_admin_us": 0,
00:17:45.639              "timeout_us": 0,
00:17:45.639              "transport_ack_timeout": 0,
00:17:45.639              "transport_retry_count": 4,
00:17:45.639              "transport_tos": 0
00:17:45.639            }
00:17:45.639          },
00:17:45.639          {
00:17:45.639            "method": "bdev_nvme_attach_controller",
00:17:45.639            "params": {
00:17:45.639              "adrfam": "IPv4",
00:17:45.639              "ctrlr_loss_timeout_sec": 0,
00:17:45.639              "ddgst": false,
00:17:45.639              "fast_io_fail_timeout_sec": 0,
00:17:45.639              "hdgst": false,
00:17:45.639              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:45.639              "name": "TLSTEST",
00:17:45.639              "prchk_guard": false,
00:17:45.639              "prchk_reftag": false,
00:17:45.639              "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt",
00:17:45.639              "reconnect_delay_sec": 0,
00:17:45.639              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:45.639              "traddr": "10.0.0.2",
00:17:45.639              "trsvcid": "4420",
00:17:45.640              "trtype": "TCP"
00:17:45.640            }
00:17:45.640          },
00:17:45.640          {
00:17:45.640            "method": "bdev_nvme_set_hotplug",
00:17:45.640            "params": {
00:17:45.640              "enable": false,
00:17:45.640              "period_us": 100000
00:17:45.640            }
00:17:45.640          },
00:17:45.640          {
00:17:45.640            "method": "bdev_wait_for_examine"
00:17:45.640          }
00:17:45.640        ]
00:17:45.640      },
00:17:45.640      {
00:17:45.640        "subsystem": "nbd",
00:17:45.640        "config": []
00:17:45.640      }
00:17:45.640    ]
00:17:45.640  }'
00:17:45.640   06:28:02	-- target/tls.sh@208 -- # killprocess 78767
00:17:45.640   06:28:02	-- common/autotest_common.sh@936 -- # '[' -z 78767 ']'
00:17:45.640   06:28:02	-- common/autotest_common.sh@940 -- # kill -0 78767
00:17:45.640    06:28:02	-- common/autotest_common.sh@941 -- # uname
00:17:45.640   06:28:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:45.640    06:28:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78767
00:17:45.640   06:28:02	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:45.640   06:28:02	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:45.640  killing process with pid 78767
00:17:45.640   06:28:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78767'
00:17:45.640  Received shutdown signal, test time was about 10.000000 seconds
00:17:45.640  
00:17:45.640                                                                                                  Latency(us)
00:17:45.640  
[2024-12-16T06:28:02.616Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:45.640  
[2024-12-16T06:28:02.616Z]  ===================================================================================================================
00:17:45.640  
[2024-12-16T06:28:02.616Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:45.640   06:28:02	-- common/autotest_common.sh@955 -- # kill 78767
00:17:45.640   06:28:02	-- common/autotest_common.sh@960 -- # wait 78767
00:17:45.899   06:28:02	-- target/tls.sh@209 -- # killprocess 78660
00:17:45.899   06:28:02	-- common/autotest_common.sh@936 -- # '[' -z 78660 ']'
00:17:45.899   06:28:02	-- common/autotest_common.sh@940 -- # kill -0 78660
00:17:45.899    06:28:02	-- common/autotest_common.sh@941 -- # uname
00:17:45.899   06:28:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:45.899    06:28:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78660
00:17:45.899   06:28:02	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:45.899   06:28:02	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:45.899  killing process with pid 78660
00:17:45.899   06:28:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78660'
00:17:45.899   06:28:02	-- common/autotest_common.sh@955 -- # kill 78660
00:17:45.899   06:28:02	-- common/autotest_common.sh@960 -- # wait 78660
00:17:46.466   06:28:03	-- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:17:46.466    06:28:03	-- target/tls.sh@212 -- # echo '{
00:17:46.466    "subsystems": [
00:17:46.466      {
00:17:46.466        "subsystem": "iobuf",
00:17:46.466        "config": [
00:17:46.466          {
00:17:46.466            "method": "iobuf_set_options",
00:17:46.466            "params": {
00:17:46.466              "large_bufsize": 135168,
00:17:46.466              "large_pool_count": 1024,
00:17:46.466              "small_bufsize": 8192,
00:17:46.466              "small_pool_count": 8192
00:17:46.466            }
00:17:46.466          }
00:17:46.466        ]
00:17:46.466      },
00:17:46.466      {
00:17:46.466        "subsystem": "sock",
00:17:46.466        "config": [
00:17:46.466          {
00:17:46.466            "method": "sock_impl_set_options",
00:17:46.466            "params": {
00:17:46.466              "enable_ktls": false,
00:17:46.466              "enable_placement_id": 0,
00:17:46.466              "enable_quickack": false,
00:17:46.466              "enable_recv_pipe": true,
00:17:46.466              "enable_zerocopy_send_client": false,
00:17:46.466              "enable_zerocopy_send_server": true,
00:17:46.466              "impl_name": "posix",
00:17:46.466              "recv_buf_size": 2097152,
00:17:46.466              "send_buf_size": 2097152,
00:17:46.466              "tls_version": 0,
00:17:46.466              "zerocopy_threshold": 0
00:17:46.466            }
00:17:46.466          },
00:17:46.466          {
00:17:46.466            "method": "sock_impl_set_options",
00:17:46.466            "params": {
00:17:46.466              "enable_ktls": false,
00:17:46.466              "enable_placement_id": 0,
00:17:46.466              "enable_quickack": false,
00:17:46.466              "enable_recv_pipe": true,
00:17:46.466              "enable_zerocopy_send_client": false,
00:17:46.466              "enable_zerocopy_send_server": true,
00:17:46.466              "impl_name": "ssl",
00:17:46.466              "recv_buf_size": 4096,
00:17:46.466              "send_buf_size": 4096,
00:17:46.466              "tls_version": 0,
00:17:46.466              "zerocopy_threshold": 0
00:17:46.466            }
00:17:46.466          }
00:17:46.466        ]
00:17:46.466      },
00:17:46.466      {
00:17:46.466        "subsystem": "vmd",
00:17:46.466        "config": []
00:17:46.466      },
00:17:46.466      {
00:17:46.466        "subsystem": "accel",
00:17:46.466        "config": [
00:17:46.466          {
00:17:46.466            "method": "accel_set_options",
00:17:46.466            "params": {
00:17:46.466              "buf_count": 2048,
00:17:46.466              "large_cache_size": 16,
00:17:46.466              "sequence_count": 2048,
00:17:46.466              "small_cache_size": 128,
00:17:46.466              "task_count": 2048
00:17:46.466            }
00:17:46.466          }
00:17:46.466        ]
00:17:46.466      },
00:17:46.466      {
00:17:46.466        "subsystem": "bdev",
00:17:46.466        "config": [
00:17:46.466          {
00:17:46.466            "method": "bdev_set_options",
00:17:46.466            "params": {
00:17:46.466              "bdev_auto_examine": true,
00:17:46.466              "bdev_io_cache_size": 256,
00:17:46.466              "bdev_io_pool_size": 65535,
00:17:46.466              "iobuf_large_cache_size": 16,
00:17:46.466              "iobuf_small_cache_size": 128
00:17:46.466            }
00:17:46.466          },
00:17:46.466          {
00:17:46.466            "method": "bdev_raid_set_options",
00:17:46.466            "params": {
00:17:46.466              "process_window_size_kb": 1024
00:17:46.466            }
00:17:46.466          },
00:17:46.467          {
00:17:46.467            "method": "bdev_iscsi_set_options",
00:17:46.467            "params": {
00:17:46.467              "timeout_sec": 30
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "bdev_nvme_set_options",
00:17:46.467            "params": {
00:17:46.467              "action_on_timeout": "none",
00:17:46.467              "allow_accel_sequence": false,
00:17:46.467              "arbitration_burst": 0,
00:17:46.467              "bdev_retry_count": 3,
00:17:46.467              "ctrlr_loss_timeout_sec": 0,
00:17:46.467              "delay_cmd_submit": true,
00:17:46.467              "fast_io_fail_timeout_sec": 0,
00:17:46.467              "generate_uuids": false,
00:17:46.467              "high_priority_weight": 0,
00:17:46.467              "io_path_stat": false,
00:17:46.467              "io_queue_requests": 0,
00:17:46.467              "keep_alive_timeout_ms": 10000,
00:17:46.467              "low_priority_weight": 0,
00:17:46.467              "medium_priority_weight": 0,
00:17:46.467              "nvme_adminq_poll_period_us": 10000,
00:17:46.467              "nvme_ioq_poll_period_us": 0,
00:17:46.467              "reconnect_delay_sec": 0,
00:17:46.467              "timeout_admin_us": 0,
00:17:46.467              "timeout_us": 0,
00:17:46.467              "transport_ack_timeout": 0,
00:17:46.467              "transport_retry_count": 4,
00:17:46.467              "transport_tos": 0
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "bdev_nvme_set_hotplug",
00:17:46.467            "params": {
00:17:46.467              "enable": false,
00:17:46.467              "period_us": 100000
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "bdev_malloc_create",
00:17:46.467            "params": {
00:17:46.467              "block_size": 4096,
00:17:46.467              "name": "malloc0",
00:17:46.467              "num_blocks": 8192,
00:17:46.467              "optimal_io_boundary": 0,
00:17:46.467              "physical_block_size": 4096,
00:17:46.467              "uuid": "d69ebdc1-e597-46d9-860e-ea3db1ac9573"
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "bdev_wait_for_examine"
00:17:46.467          }
00:17:46.467        ]
00:17:46.467      },
00:17:46.467      {
00:17:46.467        "subsystem": "nbd",
00:17:46.467        "config": []
00:17:46.467      },
00:17:46.467      {
00:17:46.467        "subsystem": "sch 06:28:03	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:17:46.467  eduler",
00:17:46.467        "config": [
00:17:46.467          {
00:17:46.467            "method": "framework_set_scheduler",
00:17:46.467            "params": {
00:17:46.467              "name": "static"
00:17:46.467            }
00:17:46.467          }
00:17:46.467        ]
00:17:46.467      },
00:17:46.467      {
00:17:46.467        "subsystem": "nvmf",
00:17:46.467        "config": [
00:17:46.467          {
00:17:46.467            "method": "nvmf_set_config",
00:17:46.467            "params": {
00:17:46.467              "admin_cmd_passthru": {
00:17:46.467                "identify_ctrlr": false
00:17:46.467              },
00:17:46.467              "discovery_filter": "match_any"
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "nvmf_set_max_subsystems",
00:17:46.467            "params": {
00:17:46.467              "max_subsystems": 1024
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "nvmf_set_crdt",
00:17:46.467            "params": {
00:17:46.467              "crdt1": 0,
00:17:46.467              "crdt2": 0,
00:17:46.467              "crdt3": 0
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "nvmf_create_transport",
00:17:46.467            "params": {
00:17:46.467              "abort_timeout_sec": 1,
00:17:46.467              "buf_cache_size": 4294967295,
00:17:46.467              "c2h_success": false,
00:17:46.467              "dif_insert_or_strip": false,
00:17:46.467              "in_capsule_data_size": 4096,
00:17:46.467              "io_unit_size": 131072,
00:17:46.467              "max_aq_depth": 128,
00:17:46.467              "max_io_qpairs_per_ctrlr": 127,
00:17:46.467              "max_io_size": 131072,
00:17:46.467              "max_queue_depth": 128,
00:17:46.467              "num_shared_buffers": 511,
00:17:46.467              "sock_priority": 0,
00:17:46.467              "trtype": "TCP",
00:17:46.467              "zcopy": false
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "nvmf_create_subsystem",
00:17:46.467            "params": {
00:17:46.467              "allow_any_host": false,
00:17:46.467              "ana_reporting": false,
00:17:46.467              "max_cntlid": 65519,
00:17:46.467              "max_namespaces": 10,
00:17:46.467              "min_cntlid": 1,
00:17:46.467              "model_number": "SPDK bdev Controller",
00:17:46.467              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:46.467              "serial_number": "SPDK00000000000001"
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "nvmf_subsystem_add_host",
00:17:46.467            "params": {
00:17:46.467              "host": "nqn.2016-06.io.spdk:host1",
00:17:46.467              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:46.467              "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt"
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "nvmf_subsystem_add_ns",
00:17:46.467            "params": {
00:17:46.467              "namespace": {
00:17:46.467                "bdev_name": "malloc0",
00:17:46.467                "nguid": "D69EBDC1E59746D9860EEA3DB1AC9573",
00:17:46.467                "nsid": 1,
00:17:46.467                "uuid": "d69ebdc1-e597-46d9-860e-ea3db1ac9573"
00:17:46.467              },
00:17:46.467              "nqn": "nqn.2016-06.io.spdk:cnode1"
00:17:46.467            }
00:17:46.467          },
00:17:46.467          {
00:17:46.467            "method": "nvmf_subsystem_add_listener",
00:17:46.467            "params": {
00:17:46.467              "listen_address": {
00:17:46.467                "adrfam": "IPv4",
00:17:46.467                "traddr": "10.0.0.2",
00:17:46.467                "trsvcid": "4420",
00:17:46.467                "trtype": "TCP"
00:17:46.467              },
00:17:46.467              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:46.467              "secure_channel": true
00:17:46.467            }
00:17:46.467          }
00:17:46.467        ]
00:17:46.467      }
00:17:46.467    ]
00:17:46.467  }'
00:17:46.467   06:28:03	-- common/autotest_common.sh@722 -- # xtrace_disable
00:17:46.467   06:28:03	-- common/autotest_common.sh@10 -- # set +x
00:17:46.467   06:28:03	-- nvmf/common.sh@469 -- # nvmfpid=78841
00:17:46.467   06:28:03	-- nvmf/common.sh@470 -- # waitforlisten 78841
00:17:46.467   06:28:03	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
00:17:46.467   06:28:03	-- common/autotest_common.sh@829 -- # '[' -z 78841 ']'
00:17:46.467   06:28:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:46.467   06:28:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:46.467  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:46.467   06:28:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:46.467   06:28:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:46.467   06:28:03	-- common/autotest_common.sh@10 -- # set +x
00:17:46.467  [2024-12-16 06:28:03.241064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:46.467  [2024-12-16 06:28:03.241159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:46.467  [2024-12-16 06:28:03.378462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:46.726  [2024-12-16 06:28:03.459002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:46.726  [2024-12-16 06:28:03.459155] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:46.726  [2024-12-16 06:28:03.459166] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:46.726  [2024-12-16 06:28:03.459175] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:46.726  [2024-12-16 06:28:03.459207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:46.984  [2024-12-16 06:28:03.704625] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:46.984  [2024-12-16 06:28:03.736586] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:17:46.984  [2024-12-16 06:28:03.736815] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:47.243   06:28:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:47.243   06:28:04	-- common/autotest_common.sh@862 -- # return 0
00:17:47.243   06:28:04	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:17:47.243   06:28:04	-- common/autotest_common.sh@728 -- # xtrace_disable
00:17:47.243   06:28:04	-- common/autotest_common.sh@10 -- # set +x
00:17:47.243   06:28:04	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:47.243   06:28:04	-- target/tls.sh@216 -- # bdevperf_pid=78884
00:17:47.243   06:28:04	-- target/tls.sh@217 -- # waitforlisten 78884 /var/tmp/bdevperf.sock
00:17:47.243   06:28:04	-- common/autotest_common.sh@829 -- # '[' -z 78884 ']'
00:17:47.243   06:28:04	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:47.243   06:28:04	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:47.243   06:28:04	-- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
00:17:47.243  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:47.243   06:28:04	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:47.243    06:28:04	-- target/tls.sh@213 -- # echo '{
00:17:47.243    "subsystems": [
00:17:47.243      {
00:17:47.243        "subsystem": "iobuf",
00:17:47.243        "config": [
00:17:47.243          {
00:17:47.243            "method": "iobuf_set_options",
00:17:47.243            "params": {
00:17:47.243              "large_bufsize": 135168,
00:17:47.243              "large_pool_count": 1024,
00:17:47.243              "small_bufsize": 8192,
00:17:47.243              "small_pool_count": 8192
00:17:47.243            }
00:17:47.243          }
00:17:47.243        ]
00:17:47.243      },
00:17:47.243      {
00:17:47.243        "subsystem": "sock",
00:17:47.243        "config": [
00:17:47.243          {
00:17:47.243            "method": "sock_impl_set_options",
00:17:47.243            "params": {
00:17:47.243              "enable_ktls": false,
00:17:47.243              "enable_placement_id": 0,
00:17:47.243              "enable_quickack": false,
00:17:47.243              "enable_recv_pipe": true,
00:17:47.243              "enable_zerocopy_send_client": false,
00:17:47.243              "enable_zerocopy_send_server": true,
00:17:47.243              "impl_name": "posix",
00:17:47.243              "recv_buf_size": 2097152,
00:17:47.243              "send_buf_size": 2097152,
00:17:47.243              "tls_version": 0,
00:17:47.243              "zerocopy_threshold": 0
00:17:47.243            }
00:17:47.243          },
00:17:47.243          {
00:17:47.243            "method": "sock_impl_set_options",
00:17:47.243            "params": {
00:17:47.243              "enable_ktls": false,
00:17:47.243              "enable_placement_id": 0,
00:17:47.243              "enable_quickack": false,
00:17:47.243              "enable_recv_pipe": true,
00:17:47.243              "enable_zerocopy_send_client": false,
00:17:47.243              "enable_zerocopy_send_server": true,
00:17:47.243              "impl_name": "ssl",
00:17:47.243              "recv_buf_size": 4096,
00:17:47.243              "send_buf_size": 4096,
00:17:47.243              "tls_version": 0,
00:17:47.243              "zerocopy_threshold": 0
00:17:47.243            }
00:17:47.243          }
00:17:47.243        ]
00:17:47.243      },
00:17:47.243      {
00:17:47.243        "subsystem": "vmd",
00:17:47.243        "config": []
00:17:47.243      },
00:17:47.243      {
00:17:47.243        "subsystem": "accel",
00:17:47.243        "config": [
00:17:47.243          {
00:17:47.243            "method": "accel_set_options",
00:17:47.243            "params": {
00:17:47.243              "buf_count": 2048,
00:17:47.243              "large_cache_size": 16,
00:17:47.243              "sequence_count": 2048,
00:17:47.243              "small_cache_size": 128,
00:17:47.243              "task_count": 2048
00:17:47.243            }
00:17:47.243          }
00:17:47.243        ]
00:17:47.243      },
00:17:47.243      {
00:17:47.243        "subsystem": "bdev",
00:17:47.243        "config": [
00:17:47.243          {
00:17:47.243            "method": "bdev_set_options",
00:17:47.243            "params": {
00:17:47.243              "bdev_auto_examine": true,
00:17:47.243              "bdev_io_cache_size": 256,
00:17:47.243              "bdev_io_pool_size": 65535,
00:17:47.243              "iobuf_large_cache_size": 16,
00:17:47.243              "iobuf_small_cache_size": 128
00:17:47.243            }
00:17:47.243          },
00:17:47.243          {
00:17:47.243            "method": "bdev_raid_set_options",
00:17:47.243            "params": {
00:17:47.243              "process_window_size_kb": 1024
00:17:47.243            }
00:17:47.243          },
00:17:47.243          {
00:17:47.243            "method": "bdev_iscsi_set_options",
00:17:47.243            "params": {
00:17:47.243              "timeout_sec": 30
00:17:47.243            }
00:17:47.243          },
00:17:47.243          {
00:17:47.243            "method": "bdev_nvme_set_options",
00:17:47.243            "params": {
00:17:47.243              "action_on_timeout": "none",
00:17:47.243              "allow_accel_sequence": false,
00:17:47.243              "arbitration_burst": 0,
00:17:47.243              "bdev_retry_count": 3,
00:17:47.243              "ctrlr_loss_timeout_sec": 0,
00:17:47.243              "delay_cmd_submit": true,
00:17:47.243              "fast_io_fail_timeout_sec": 0,
00:17:47.243              "generate_uuids": false,
00:17:47.243              "high_priority_weight": 0,
00:17:47.244              "io_path_stat": false,
00:17:47.244              "io_queue_requests": 512,
00:17:47.244              "keep_alive_timeout_ms": 10000,
00:17:47.244              "low_priority_weight": 0,
00:17:47.244              "medium_priority_weight": 0,
00:17:47.244              "nvme_adminq_poll_period_us": 10000,
00:17:47.244              "nvme_ioq_poll_period_us": 0,
00:17:47.244              "reconnect_delay_sec": 0,
00:17:47.244              "timeout_admin_us": 0,
00:17:47.244              "timeout_us": 0,
00:17:47.244              "transport_ack_timeout": 0,
00:17:47.244              "transport_retry_count": 4,
00:17:47.244              "transport_tos": 0
00:17:47.244            }
00:17:47.244          },
00:17:47.244          {
00:17:47.244            "method": "bdev_nvme_attach_controller",
00:17:47.244            "params": {
00:17:47.244              "adrfam": "IPv4",
00:17:47.244              "ctrlr_loss_timeout_sec": 0,
00:17:47.244              "ddgst": false,
00:17:47.244              "fast_io_fail_timeout_sec": 0,
00:17:47.244              "hdgst": false,
00:17:47.244              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:47.244              "name": "TLSTEST",
00:17:47.244              "prchk_guard": false,
00:17:47.244              "prchk_reftag": false,
00:17:47.244              "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt",
00:17:47.244              "reconnect_delay_sec": 0,
00:17:47.244              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:47.244              "traddr": "10.0.0.2",
00:17:47.244              "trsvcid": "4420",
00:17:47.244              "trtype": "TCP"
00:17:47.244            }
00:17:47.244          },
00:17:47.244          {
00:17:47.244            "method": "bdev_nvme_set_hotplug",
00:17:47.244            "params": {
00:17:47.244              "enable": false,
00:17:47.244              "period_us": 100000
00:17:47.244            }
00:17:47.244          },
00:17:47.244          {
00:17:47.244            "method": "bdev_wait_for_examine"
00:17:47.244          }
00:17:47.244        ]
00:17:47.244      },
00:17:47.244      {
00:17:47.244        "subsystem": "nbd",
00:17:47.244        "config": []
00:17:47.244      }
00:17:47.244    ]
00:17:47.244  }'
00:17:47.244   06:28:04	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:47.244   06:28:04	-- common/autotest_common.sh@10 -- # set +x
00:17:47.502  [2024-12-16 06:28:04.251024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:47.502  [2024-12-16 06:28:04.251112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78884 ]
00:17:47.502  [2024-12-16 06:28:04.390318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:47.761  [2024-12-16 06:28:04.483643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:47.761  [2024-12-16 06:28:04.632466] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:17:48.328   06:28:05	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:48.328   06:28:05	-- common/autotest_common.sh@862 -- # return 0
00:17:48.328   06:28:05	-- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:17:48.586  Running I/O for 10 seconds...
00:17:58.559  
00:17:58.559                                                                                                  Latency(us)
00:17:58.559  
[2024-12-16T06:28:15.535Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:58.559  
[2024-12-16T06:28:15.535Z]  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:58.559  	 Verification LBA range: start 0x0 length 0x2000
00:17:58.559  	 TLSTESTn1           :      10.01    6545.08      25.57       0.00     0.00   19526.43    4200.26   19899.11
00:17:58.559  
[2024-12-16T06:28:15.535Z]  ===================================================================================================================
00:17:58.559  
[2024-12-16T06:28:15.535Z]  Total                       :               6545.08      25.57       0.00     0.00   19526.43    4200.26   19899.11
00:17:58.559  0
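[annotation] Sanity check on the table above: 6545.08 IOPS at a 4 KiB I/O size is 6545.08 * 4096 / 2^20 ≈ 25.57 MiB/s, matching the MiB/s column; and with queue depth 128, an average latency of ~19526 us implies roughly 128 / 0.0195 s ≈ 6.5k IOPS (Little's law), consistent with the measured rate.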
00:17:58.559   06:28:15	-- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:58.559   06:28:15	-- target/tls.sh@223 -- # killprocess 78884
00:17:58.559   06:28:15	-- common/autotest_common.sh@936 -- # '[' -z 78884 ']'
00:17:58.559   06:28:15	-- common/autotest_common.sh@940 -- # kill -0 78884
00:17:58.559    06:28:15	-- common/autotest_common.sh@941 -- # uname
00:17:58.559   06:28:15	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:58.559    06:28:15	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78884
00:17:58.559   06:28:15	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:58.559   06:28:15	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:58.559  killing process with pid 78884
00:17:58.559   06:28:15	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78884'
00:17:58.559  Received shutdown signal, test time was about 10.000000 seconds
00:17:58.559  
00:17:58.559                                                                                                  Latency(us)
00:17:58.559  
[2024-12-16T06:28:15.535Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:58.559  
[2024-12-16T06:28:15.535Z]  ===================================================================================================================
00:17:58.559  
[2024-12-16T06:28:15.535Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:17:58.559   06:28:15	-- common/autotest_common.sh@955 -- # kill 78884
00:17:58.559   06:28:15	-- common/autotest_common.sh@960 -- # wait 78884
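[annotation] The killprocess trace above follows a simple pattern. A minimal sketch, reconstructed from the xtrace rather than copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            # the real helper treats a sudo wrapper specially; simplified here
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }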
00:17:58.818   06:28:15	-- target/tls.sh@224 -- # killprocess 78841
00:17:58.818   06:28:15	-- common/autotest_common.sh@936 -- # '[' -z 78841 ']'
00:17:58.818   06:28:15	-- common/autotest_common.sh@940 -- # kill -0 78841
00:17:58.818    06:28:15	-- common/autotest_common.sh@941 -- # uname
00:17:58.818   06:28:15	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:58.818    06:28:15	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78841
00:17:58.818   06:28:15	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:58.818   06:28:15	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:58.818  killing process with pid 78841
00:17:58.818   06:28:15	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 78841'
00:17:58.818   06:28:15	-- common/autotest_common.sh@955 -- # kill 78841
00:17:58.818   06:28:15	-- common/autotest_common.sh@960 -- # wait 78841
00:17:59.077   06:28:15	-- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT
00:17:59.077   06:28:15	-- target/tls.sh@227 -- # cleanup
00:17:59.077   06:28:15	-- target/tls.sh@15 -- # process_shm --id 0
00:17:59.077   06:28:15	-- common/autotest_common.sh@806 -- # type=--id
00:17:59.077   06:28:15	-- common/autotest_common.sh@807 -- # id=0
00:17:59.077   06:28:15	-- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:17:59.077    06:28:15	-- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:17:59.077   06:28:15	-- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:17:59.077   06:28:15	-- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:17:59.077   06:28:15	-- common/autotest_common.sh@818 -- # for n in $shm_files
00:17:59.077   06:28:15	-- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:17:59.077  nvmf_trace.0
00:17:59.077   06:28:15	-- common/autotest_common.sh@821 -- # return 0
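[annotation] For reference, the shared-memory trace collection above amounts to (output path as used verbatim in this run):

    out=/home/vagrant/spdk_repo/spdk/../output
    for n in $(find /dev/shm -name '*.0' -printf '%f\n'); do
        tar -C /dev/shm/ -cvzf "$out/${n}_shm.tar.gz" "$n"
    done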
00:17:59.077   06:28:16	-- target/tls.sh@16 -- # killprocess 78884
00:17:59.077   06:28:16	-- common/autotest_common.sh@936 -- # '[' -z 78884 ']'
00:17:59.077   06:28:16	-- common/autotest_common.sh@940 -- # kill -0 78884
00:17:59.077  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (78884) - No such process
00:17:59.077  Process with pid 78884 is not found
00:17:59.077   06:28:16	-- common/autotest_common.sh@963 -- # echo 'Process with pid 78884 is not found'
00:17:59.077   06:28:16	-- target/tls.sh@17 -- # nvmftestfini
00:17:59.077   06:28:16	-- nvmf/common.sh@476 -- # nvmfcleanup
00:17:59.077   06:28:16	-- nvmf/common.sh@116 -- # sync
00:17:59.077   06:28:16	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:17:59.336   06:28:16	-- nvmf/common.sh@119 -- # set +e
00:17:59.336   06:28:16	-- nvmf/common.sh@120 -- # for i in {1..20}
00:17:59.336   06:28:16	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:17:59.336  rmmod nvme_tcp
00:17:59.336  rmmod nvme_fabrics
00:17:59.336  rmmod nvme_keyring
00:17:59.336   06:28:16	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:59.336   06:28:16	-- nvmf/common.sh@123 -- # set -e
00:17:59.336   06:28:16	-- nvmf/common.sh@124 -- # return 0
00:17:59.336   06:28:16	-- nvmf/common.sh@477 -- # '[' -n 78841 ']'
00:17:59.336   06:28:16	-- nvmf/common.sh@478 -- # killprocess 78841
00:17:59.336   06:28:16	-- common/autotest_common.sh@936 -- # '[' -z 78841 ']'
00:17:59.336   06:28:16	-- common/autotest_common.sh@940 -- # kill -0 78841
00:17:59.336  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (78841) - No such process
00:17:59.336  Process with pid 78841 is not found
00:17:59.336   06:28:16	-- common/autotest_common.sh@963 -- # echo 'Process with pid 78841 is not found'
00:17:59.336   06:28:16	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:17:59.336   06:28:16	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:17:59.336   06:28:16	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:59.336   06:28:16	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:59.336   06:28:16	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:17:59.336   06:28:16	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:59.336   06:28:16	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:59.336    06:28:16	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:59.336   06:28:16	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:17:59.336   06:28:16	-- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
00:17:59.336  ************************************
00:17:59.336  END TEST nvmf_tls
00:17:59.336  ************************************
00:17:59.336  
00:17:59.336  real	1m11.288s
00:17:59.336  user	1m45.638s
00:17:59.336  sys	0m26.903s
00:17:59.336   06:28:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:59.336   06:28:16	-- common/autotest_common.sh@10 -- # set +x
00:17:59.336   06:28:16	-- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:17:59.336   06:28:16	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:17:59.336   06:28:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:59.336   06:28:16	-- common/autotest_common.sh@10 -- # set +x
00:17:59.336  ************************************
00:17:59.336  START TEST nvmf_fips
00:17:59.336  ************************************
00:17:59.336   06:28:16	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:17:59.336  * Looking for test storage...
00:17:59.336  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips
00:17:59.336    06:28:16	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:17:59.336     06:28:16	-- common/autotest_common.sh@1690 -- # lcov --version
00:17:59.336     06:28:16	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:17:59.597    06:28:16	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:17:59.597    06:28:16	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:17:59.597    06:28:16	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:17:59.597    06:28:16	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:17:59.597    06:28:16	-- scripts/common.sh@335 -- # IFS=.-:
00:17:59.597    06:28:16	-- scripts/common.sh@335 -- # read -ra ver1
00:17:59.597    06:28:16	-- scripts/common.sh@336 -- # IFS=.-:
00:17:59.597    06:28:16	-- scripts/common.sh@336 -- # read -ra ver2
00:17:59.597    06:28:16	-- scripts/common.sh@337 -- # local 'op=<'
00:17:59.597    06:28:16	-- scripts/common.sh@339 -- # ver1_l=2
00:17:59.597    06:28:16	-- scripts/common.sh@340 -- # ver2_l=1
00:17:59.597    06:28:16	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:17:59.597    06:28:16	-- scripts/common.sh@343 -- # case "$op" in
00:17:59.597    06:28:16	-- scripts/common.sh@344 -- # : 1
00:17:59.597    06:28:16	-- scripts/common.sh@363 -- # (( v = 0 ))
00:17:59.597    06:28:16	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:59.597     06:28:16	-- scripts/common.sh@364 -- # decimal 1
00:17:59.597     06:28:16	-- scripts/common.sh@352 -- # local d=1
00:17:59.597     06:28:16	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:59.597     06:28:16	-- scripts/common.sh@354 -- # echo 1
00:17:59.597    06:28:16	-- scripts/common.sh@364 -- # ver1[v]=1
00:17:59.597     06:28:16	-- scripts/common.sh@365 -- # decimal 2
00:17:59.597     06:28:16	-- scripts/common.sh@352 -- # local d=2
00:17:59.597     06:28:16	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:59.597     06:28:16	-- scripts/common.sh@354 -- # echo 2
00:17:59.597    06:28:16	-- scripts/common.sh@365 -- # ver2[v]=2
00:17:59.597    06:28:16	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:17:59.597    06:28:16	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:17:59.597    06:28:16	-- scripts/common.sh@367 -- # return 0
00:17:59.597    06:28:16	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:59.597    06:28:16	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:17:59.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:59.597  		--rc genhtml_branch_coverage=1
00:17:59.597  		--rc genhtml_function_coverage=1
00:17:59.597  		--rc genhtml_legend=1
00:17:59.597  		--rc geninfo_all_blocks=1
00:17:59.597  		--rc geninfo_unexecuted_blocks=1
00:17:59.597  		
00:17:59.597  		'
00:17:59.597    06:28:16	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:17:59.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:59.597  		--rc genhtml_branch_coverage=1
00:17:59.597  		--rc genhtml_function_coverage=1
00:17:59.597  		--rc genhtml_legend=1
00:17:59.597  		--rc geninfo_all_blocks=1
00:17:59.597  		--rc geninfo_unexecuted_blocks=1
00:17:59.597  		
00:17:59.597  		'
00:17:59.597    06:28:16	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:17:59.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:59.597  		--rc genhtml_branch_coverage=1
00:17:59.597  		--rc genhtml_function_coverage=1
00:17:59.597  		--rc genhtml_legend=1
00:17:59.597  		--rc geninfo_all_blocks=1
00:17:59.597  		--rc geninfo_unexecuted_blocks=1
00:17:59.597  		
00:17:59.597  		'
00:17:59.597    06:28:16	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:17:59.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:59.597  		--rc genhtml_branch_coverage=1
00:17:59.597  		--rc genhtml_function_coverage=1
00:17:59.597  		--rc genhtml_legend=1
00:17:59.597  		--rc geninfo_all_blocks=1
00:17:59.597  		--rc geninfo_unexecuted_blocks=1
00:17:59.597  		
00:17:59.597  		'
00:17:59.597   06:28:16	-- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:17:59.597     06:28:16	-- nvmf/common.sh@7 -- # uname -s
00:17:59.597    06:28:16	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:59.597    06:28:16	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:59.597    06:28:16	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:59.597    06:28:16	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:59.597    06:28:16	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:59.597    06:28:16	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:59.597    06:28:16	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:59.597    06:28:16	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:59.597    06:28:16	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:59.597     06:28:16	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:59.597    06:28:16	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:17:59.597    06:28:16	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:17:59.597    06:28:16	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:59.597    06:28:16	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:59.597    06:28:16	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:17:59.597    06:28:16	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:59.597     06:28:16	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:59.597     06:28:16	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:59.597     06:28:16	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:59.597      06:28:16	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:59.597      06:28:16	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:59.598      06:28:16	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:59.598      06:28:16	-- paths/export.sh@5 -- # export PATH
00:17:59.598      06:28:16	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:59.598    06:28:16	-- nvmf/common.sh@46 -- # : 0
00:17:59.598    06:28:16	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:17:59.598    06:28:16	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:17:59.598    06:28:16	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:17:59.598    06:28:16	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:59.598    06:28:16	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:59.598    06:28:16	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:17:59.598    06:28:16	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:17:59.598    06:28:16	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:17:59.598   06:28:16	-- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:59.598   06:28:16	-- fips/fips.sh@89 -- # check_openssl_version
00:17:59.598   06:28:16	-- fips/fips.sh@83 -- # local target=3.0.0
00:17:59.598    06:28:16	-- fips/fips.sh@85 -- # awk '{print $2}'
00:17:59.598    06:28:16	-- fips/fips.sh@85 -- # openssl version
00:17:59.598   06:28:16	-- fips/fips.sh@85 -- # ge 3.1.1 3.0.0
00:17:59.598   06:28:16	-- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0
00:17:59.598   06:28:16	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:17:59.598   06:28:16	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:17:59.598   06:28:16	-- scripts/common.sh@335 -- # IFS=.-:
00:17:59.598   06:28:16	-- scripts/common.sh@335 -- # read -ra ver1
00:17:59.598   06:28:16	-- scripts/common.sh@336 -- # IFS=.-:
00:17:59.598   06:28:16	-- scripts/common.sh@336 -- # read -ra ver2
00:17:59.598   06:28:16	-- scripts/common.sh@337 -- # local 'op=>='
00:17:59.598   06:28:16	-- scripts/common.sh@339 -- # ver1_l=3
00:17:59.598   06:28:16	-- scripts/common.sh@340 -- # ver2_l=3
00:17:59.598   06:28:16	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:17:59.598   06:28:16	-- scripts/common.sh@343 -- # case "$op" in
00:17:59.598   06:28:16	-- scripts/common.sh@347 -- # : 1
00:17:59.598   06:28:16	-- scripts/common.sh@363 -- # (( v = 0 ))
00:17:59.598   06:28:16	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:59.598    06:28:16	-- scripts/common.sh@364 -- # decimal 3
00:17:59.598    06:28:16	-- scripts/common.sh@352 -- # local d=3
00:17:59.598    06:28:16	-- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]]
00:17:59.598    06:28:16	-- scripts/common.sh@354 -- # echo 3
00:17:59.598   06:28:16	-- scripts/common.sh@364 -- # ver1[v]=3
00:17:59.598    06:28:16	-- scripts/common.sh@365 -- # decimal 3
00:17:59.598    06:28:16	-- scripts/common.sh@352 -- # local d=3
00:17:59.598    06:28:16	-- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]]
00:17:59.598    06:28:16	-- scripts/common.sh@354 -- # echo 3
00:17:59.598   06:28:16	-- scripts/common.sh@365 -- # ver2[v]=3
00:17:59.598   06:28:16	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:17:59.598   06:28:16	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:17:59.598   06:28:16	-- scripts/common.sh@363 -- # (( v++ ))
00:17:59.598   06:28:16	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:59.598    06:28:16	-- scripts/common.sh@364 -- # decimal 1
00:17:59.598    06:28:16	-- scripts/common.sh@352 -- # local d=1
00:17:59.598    06:28:16	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:59.598    06:28:16	-- scripts/common.sh@354 -- # echo 1
00:17:59.598   06:28:16	-- scripts/common.sh@364 -- # ver1[v]=1
00:17:59.598    06:28:16	-- scripts/common.sh@365 -- # decimal 0
00:17:59.598    06:28:16	-- scripts/common.sh@352 -- # local d=0
00:17:59.598    06:28:16	-- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:59.598    06:28:16	-- scripts/common.sh@354 -- # echo 0
00:17:59.598   06:28:16	-- scripts/common.sh@365 -- # ver2[v]=0
00:17:59.598   06:28:16	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:17:59.598   06:28:16	-- scripts/common.sh@366 -- # return 0
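[annotation] check_openssl_version only needs the numeric comparison traced above (`ge 3.1.1 3.0.0`). A condensed sketch of that comparison, simplified from the cmp_versions trace (the real scripts/common.sh helper also tracks lt/gt/eq and strips non-numeric parts):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
            (( d1 > d2 )) && { [[ $op == '>=' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<'  ]]; return; }
        done
        [[ $op == '>=' ]]   # equal versions: '>=' holds, '<' does not
    }
    ge() { cmp_versions "$1" '>=' "$2"; }   # 3.1.1 >= 3.0.0 -> true, as traced above
    lt() { cmp_versions "$1" '<'  "$2"; }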
00:17:59.598    06:28:16	-- fips/fips.sh@95 -- # openssl info -modulesdir
00:17:59.598   06:28:16	-- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]]
00:17:59.598    06:28:16	-- fips/fips.sh@100 -- # openssl fipsinstall -help
00:17:59.598   06:28:16	-- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode'
00:17:59.598   06:28:16	-- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]]
00:17:59.598   06:28:16	-- fips/fips.sh@104 -- # export callback=build_openssl_config
00:17:59.598   06:28:16	-- fips/fips.sh@104 -- # callback=build_openssl_config
00:17:59.598   06:28:16	-- fips/fips.sh@113 -- # build_openssl_config
00:17:59.598   06:28:16	-- fips/fips.sh@37 -- # cat
00:17:59.598   06:28:16	-- fips/fips.sh@57 -- # [[ ! -t 0 ]]
00:17:59.598   06:28:16	-- fips/fips.sh@58 -- # cat -
00:17:59.598   06:28:16	-- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf
00:17:59.598   06:28:16	-- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf
00:17:59.598   06:28:16	-- fips/fips.sh@116 -- # mapfile -t providers
00:17:59.598    06:28:16	-- fips/fips.sh@116 -- # openssl list -providers
00:17:59.598    06:28:16	-- fips/fips.sh@116 -- # grep name
00:17:59.598   06:28:16	-- fips/fips.sh@120 -- # (( 2 != 2 ))
00:17:59.598   06:28:16	-- fips/fips.sh@120 -- # [[     name: openssl base provider != *base* ]]
00:17:59.598   06:28:16	-- fips/fips.sh@120 -- # [[     name: red hat enterprise linux 9 - openssl fips provider != *fips* ]]
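[annotation] The provider check above reduces to one command; both a base and a fips provider must be listed for the test to continue:

    OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name
    #     name: openssl base provider
    #     name: red hat enterprise linux 9 - openssl fips provider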
00:17:59.598   06:28:16	-- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62
00:17:59.598    06:28:16	-- fips/fips.sh@127 -- # :
00:17:59.598   06:28:16	-- common/autotest_common.sh@650 -- # local es=0
00:17:59.598   06:28:16	-- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62
00:17:59.598   06:28:16	-- common/autotest_common.sh@638 -- # local arg=openssl
00:17:59.598   06:28:16	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:59.598    06:28:16	-- common/autotest_common.sh@642 -- # type -t openssl
00:17:59.598   06:28:16	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:59.598    06:28:16	-- common/autotest_common.sh@644 -- # type -P openssl
00:17:59.598   06:28:16	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:59.598   06:28:16	-- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl
00:17:59.598   06:28:16	-- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]]
00:17:59.598   06:28:16	-- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62
00:17:59.890  Error setting digest
00:17:59.890  4012FAB6397F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties ()
00:17:59.890  4012FAB6397F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272:
00:17:59.890   06:28:16	-- common/autotest_common.sh@653 -- # es=1
00:17:59.890   06:28:16	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:59.890   06:28:16	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:59.890   06:28:16	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
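[annotation] The NOT wrapper above is a negative test: with the FIPS provider active, fetching MD5 must fail, so the two digital envelope errors above are the expected outcome. Roughly equivalent, with /dev/null standing in for the fd the script hashes:

    if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null 2>/dev/null; then
        echo "MD5 unexpectedly available, FIPS mode is not in effect" >&2
        exit 1
    fi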
00:17:59.890   06:28:16	-- fips/fips.sh@130 -- # nvmftestinit
00:17:59.890   06:28:16	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:17:59.890   06:28:16	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:59.890   06:28:16	-- nvmf/common.sh@436 -- # prepare_net_devs
00:17:59.890   06:28:16	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:17:59.890   06:28:16	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:17:59.890   06:28:16	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:59.890   06:28:16	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:59.890    06:28:16	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:59.890   06:28:16	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:17:59.890   06:28:16	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:17:59.890   06:28:16	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:17:59.890   06:28:16	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:17:59.890   06:28:16	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:17:59.890   06:28:16	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:17:59.890   06:28:16	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:59.890   06:28:16	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:59.890   06:28:16	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:17:59.890   06:28:16	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:17:59.890   06:28:16	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:17:59.890   06:28:16	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:17:59.890   06:28:16	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:17:59.890   06:28:16	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:59.890   06:28:16	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:17:59.890   06:28:16	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:17:59.890   06:28:16	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:17:59.890   06:28:16	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:17:59.890   06:28:16	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:17:59.890   06:28:16	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:17:59.890  Cannot find device "nvmf_tgt_br"
00:17:59.890   06:28:16	-- nvmf/common.sh@154 -- # true
00:17:59.890   06:28:16	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:17:59.890  Cannot find device "nvmf_tgt_br2"
00:17:59.890   06:28:16	-- nvmf/common.sh@155 -- # true
00:17:59.890   06:28:16	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:17:59.890   06:28:16	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:17:59.890  Cannot find device "nvmf_tgt_br"
00:17:59.890   06:28:16	-- nvmf/common.sh@157 -- # true
00:17:59.890   06:28:16	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:17:59.890  Cannot find device "nvmf_tgt_br2"
00:17:59.890   06:28:16	-- nvmf/common.sh@158 -- # true
00:17:59.890   06:28:16	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:17:59.890   06:28:16	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:17:59.890   06:28:16	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:59.890  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:17:59.890   06:28:16	-- nvmf/common.sh@161 -- # true
00:17:59.890   06:28:16	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:59.890  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:17:59.890   06:28:16	-- nvmf/common.sh@162 -- # true
00:17:59.890   06:28:16	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:17:59.890   06:28:16	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:17:59.890   06:28:16	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:17:59.890   06:28:16	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:17:59.890   06:28:16	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:17:59.890   06:28:16	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:17:59.890   06:28:16	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:17:59.890   06:28:16	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:17:59.890   06:28:16	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:17:59.890   06:28:16	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:17:59.890   06:28:16	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:17:59.890   06:28:16	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:17:59.890   06:28:16	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:17:59.890   06:28:16	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:17:59.890   06:28:16	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:17:59.890   06:28:16	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:17:59.890   06:28:16	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:17:59.890   06:28:16	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:18:00.154   06:28:16	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:18:00.154   06:28:16	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:18:00.154   06:28:16	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:18:00.154   06:28:16	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:18:00.154   06:28:16	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:18:00.154   06:28:16	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:18:00.154  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:00.154  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms
00:18:00.154  
00:18:00.154  --- 10.0.0.2 ping statistics ---
00:18:00.154  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:00.154  rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:18:00.154   06:28:16	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:18:00.154  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:18:00.154  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms
00:18:00.154  
00:18:00.154  --- 10.0.0.3 ping statistics ---
00:18:00.155  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:00.155  rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:18:00.155   06:28:16	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:00.155  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:00.155  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:18:00.155  
00:18:00.155  --- 10.0.0.1 ping statistics ---
00:18:00.155  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:00.155  rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
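[annotation] Condensed, the topology that nvmf_veth_init builds above (commands as issued in this log; 10.0.0.1 stays on the host as the initiator, 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # every end is then brought up with "ip link set <dev> up" and reachability
    # is verified with the three pings above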
00:18:00.155   06:28:16	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:00.155   06:28:16	-- nvmf/common.sh@421 -- # return 0
00:18:00.155   06:28:16	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:18:00.155   06:28:16	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:00.155   06:28:16	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:18:00.155   06:28:16	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:18:00.155   06:28:16	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:00.155   06:28:16	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:18:00.155   06:28:16	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:18:00.155   06:28:16	-- fips/fips.sh@131 -- # nvmfappstart -m 0x2
00:18:00.155   06:28:16	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:18:00.155   06:28:16	-- common/autotest_common.sh@722 -- # xtrace_disable
00:18:00.155   06:28:16	-- common/autotest_common.sh@10 -- # set +x
00:18:00.155   06:28:16	-- nvmf/common.sh@469 -- # nvmfpid=79249
00:18:00.155   06:28:16	-- nvmf/common.sh@470 -- # waitforlisten 79249
00:18:00.155  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:00.155   06:28:16	-- common/autotest_common.sh@829 -- # '[' -z 79249 ']'
00:18:00.155   06:28:16	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:00.155   06:28:16	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:00.155   06:28:16	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:00.155   06:28:16	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:00.155   06:28:16	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:00.155   06:28:16	-- common/autotest_common.sh@10 -- # set +x
00:18:00.155  [2024-12-16 06:28:17.017322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:00.155  [2024-12-16 06:28:17.017575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:00.413  [2024-12-16 06:28:17.154828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:00.413  [2024-12-16 06:28:17.235645] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:18:00.413  [2024-12-16 06:28:17.236120] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:00.413  [2024-12-16 06:28:17.236140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:00.413  [2024-12-16 06:28:17.236149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:00.413  [2024-12-16 06:28:17.236186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:00.980   06:28:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:00.980   06:28:17	-- common/autotest_common.sh@862 -- # return 0
00:18:00.980   06:28:17	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:18:00.980   06:28:17	-- common/autotest_common.sh@728 -- # xtrace_disable
00:18:00.980   06:28:17	-- common/autotest_common.sh@10 -- # set +x
00:18:00.980   06:28:17	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:00.980   06:28:17	-- fips/fips.sh@133 -- # trap cleanup EXIT
00:18:00.980   06:28:17	-- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:18:00.980   06:28:17	-- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
00:18:00.980   06:28:17	-- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:18:00.980   06:28:17	-- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
00:18:00.980   06:28:17	-- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
00:18:00.980   06:28:17	-- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
00:18:00.980   06:28:17	-- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:01.547  [2024-12-16 06:28:18.214628] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:01.547  [2024-12-16 06:28:18.230601] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:01.547  [2024-12-16 06:28:18.230829] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:01.547  malloc0
00:18:01.547   06:28:18	-- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:01.547   06:28:18	-- fips/fips.sh@147 -- # bdevperf_pid=79306
00:18:01.547   06:28:18	-- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:01.547   06:28:18	-- fips/fips.sh@148 -- # waitforlisten 79306 /var/tmp/bdevperf.sock
00:18:01.547   06:28:18	-- common/autotest_common.sh@829 -- # '[' -z 79306 ']'
00:18:01.547   06:28:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:01.547   06:28:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:01.547   06:28:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:01.547  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:01.547   06:28:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:01.547   06:28:18	-- common/autotest_common.sh@10 -- # set +x
00:18:01.547  [2024-12-16 06:28:18.371398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:01.547  [2024-12-16 06:28:18.371700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79306 ]
00:18:01.547  [2024-12-16 06:28:18.510945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.805  [2024-12-16 06:28:18.595289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:02.372   06:28:19	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:02.372   06:28:19	-- common/autotest_common.sh@862 -- # return 0
00:18:02.372   06:28:19	-- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
00:18:02.631  [2024-12-16 06:28:19.499920] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:02.631  TLSTESTn1
00:18:02.631   06:28:19	-- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:02.889  Running I/O for 10 seconds...
00:18:12.864  
00:18:12.864                                                                                                  Latency(us)
00:18:12.864  
[2024-12-16T06:28:29.840Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:12.864  
[2024-12-16T06:28:29.840Z]  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:12.864  	 Verification LBA range: start 0x0 length 0x2000
00:18:12.864  	 TLSTESTn1           :      10.01    6499.27      25.39       0.00     0.00   19663.93    3991.74   24665.37
00:18:12.864  
[2024-12-16T06:28:29.840Z]  ===================================================================================================================
00:18:12.864  
[2024-12-16T06:28:29.840Z]  Total                       :               6499.27      25.39       0.00     0.00   19663.93    3991.74   24665.37
00:18:12.864  0
00:18:12.864   06:28:29	-- fips/fips.sh@1 -- # cleanup
00:18:12.864   06:28:29	-- fips/fips.sh@15 -- # process_shm --id 0
00:18:12.864   06:28:29	-- common/autotest_common.sh@806 -- # type=--id
00:18:12.864   06:28:29	-- common/autotest_common.sh@807 -- # id=0
00:18:12.864   06:28:29	-- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:18:12.864    06:28:29	-- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:18:12.864   06:28:29	-- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:18:12.864   06:28:29	-- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:18:12.864   06:28:29	-- common/autotest_common.sh@818 -- # for n in $shm_files
00:18:12.864   06:28:29	-- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:18:12.864  nvmf_trace.0
00:18:12.864   06:28:29	-- common/autotest_common.sh@821 -- # return 0
00:18:12.864   06:28:29	-- fips/fips.sh@16 -- # killprocess 79306
00:18:12.864   06:28:29	-- common/autotest_common.sh@936 -- # '[' -z 79306 ']'
00:18:12.864   06:28:29	-- common/autotest_common.sh@940 -- # kill -0 79306
00:18:12.864    06:28:29	-- common/autotest_common.sh@941 -- # uname
00:18:12.864   06:28:29	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:12.864    06:28:29	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79306
00:18:12.864  killing process with pid 79306
00:18:12.864  Received shutdown signal, test time was about 10.000000 seconds
00:18:12.864  
00:18:12.864                                                                                                  Latency(us)
00:18:12.864  
[2024-12-16T06:28:29.840Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:12.864  
[2024-12-16T06:28:29.840Z]  ===================================================================================================================
00:18:12.864  
[2024-12-16T06:28:29.840Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:18:12.864   06:28:29	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:12.864   06:28:29	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:12.864   06:28:29	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 79306'
00:18:12.864   06:28:29	-- common/autotest_common.sh@955 -- # kill 79306
00:18:12.864   06:28:29	-- common/autotest_common.sh@960 -- # wait 79306
00:18:13.123   06:28:30	-- fips/fips.sh@17 -- # nvmftestfini
00:18:13.123   06:28:30	-- nvmf/common.sh@476 -- # nvmfcleanup
00:18:13.123   06:28:30	-- nvmf/common.sh@116 -- # sync
00:18:13.382   06:28:30	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:18:13.382   06:28:30	-- nvmf/common.sh@119 -- # set +e
00:18:13.382   06:28:30	-- nvmf/common.sh@120 -- # for i in {1..20}
00:18:13.382   06:28:30	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:18:13.382  rmmod nvme_tcp
00:18:13.382  rmmod nvme_fabrics
00:18:13.382  rmmod nvme_keyring
00:18:13.382   06:28:30	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:18:13.382   06:28:30	-- nvmf/common.sh@123 -- # set -e
00:18:13.382   06:28:30	-- nvmf/common.sh@124 -- # return 0
00:18:13.382   06:28:30	-- nvmf/common.sh@477 -- # '[' -n 79249 ']'
00:18:13.382   06:28:30	-- nvmf/common.sh@478 -- # killprocess 79249
00:18:13.382   06:28:30	-- common/autotest_common.sh@936 -- # '[' -z 79249 ']'
00:18:13.382   06:28:30	-- common/autotest_common.sh@940 -- # kill -0 79249
00:18:13.382    06:28:30	-- common/autotest_common.sh@941 -- # uname
00:18:13.382   06:28:30	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:13.382    06:28:30	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79249
00:18:13.382  killing process with pid 79249
00:18:13.382   06:28:30	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:18:13.382   06:28:30	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:18:13.382   06:28:30	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 79249'
00:18:13.382   06:28:30	-- common/autotest_common.sh@955 -- # kill 79249
00:18:13.382   06:28:30	-- common/autotest_common.sh@960 -- # wait 79249
00:18:13.640   06:28:30	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:18:13.640   06:28:30	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:18:13.640   06:28:30	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:18:13.640   06:28:30	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:13.640   06:28:30	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:18:13.640   06:28:30	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:13.640   06:28:30	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:13.640    06:28:30	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:13.640   06:28:30	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:18:13.640   06:28:30	-- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
00:18:13.640  ************************************
00:18:13.640  END TEST nvmf_fips
00:18:13.640  ************************************
00:18:13.640  
00:18:13.640  real	0m14.341s
00:18:13.640  user	0m18.269s
00:18:13.640  sys	0m6.424s
00:18:13.640   06:28:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:13.640   06:28:30	-- common/autotest_common.sh@10 -- # set +x
00:18:13.640   06:28:30	-- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']'
00:18:13.640   06:28:30	-- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:18:13.640   06:28:30	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:13.640   06:28:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:13.640   06:28:30	-- common/autotest_common.sh@10 -- # set +x
00:18:13.640  ************************************
00:18:13.640  START TEST nvmf_fuzz
00:18:13.640  ************************************
00:18:13.640   06:28:30	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:18:13.900  * Looking for test storage...
00:18:13.900  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:18:13.900    06:28:30	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:18:13.900     06:28:30	-- common/autotest_common.sh@1690 -- # lcov --version
00:18:13.900     06:28:30	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:18:13.900    06:28:30	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:18:13.900    06:28:30	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:18:13.900    06:28:30	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:18:13.900    06:28:30	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:18:13.900    06:28:30	-- scripts/common.sh@335 -- # IFS=.-:
00:18:13.900    06:28:30	-- scripts/common.sh@335 -- # read -ra ver1
00:18:13.900    06:28:30	-- scripts/common.sh@336 -- # IFS=.-:
00:18:13.900    06:28:30	-- scripts/common.sh@336 -- # read -ra ver2
00:18:13.900    06:28:30	-- scripts/common.sh@337 -- # local 'op=<'
00:18:13.900    06:28:30	-- scripts/common.sh@339 -- # ver1_l=2
00:18:13.900    06:28:30	-- scripts/common.sh@340 -- # ver2_l=1
00:18:13.900    06:28:30	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:18:13.900    06:28:30	-- scripts/common.sh@343 -- # case "$op" in
00:18:13.900    06:28:30	-- scripts/common.sh@344 -- # : 1
00:18:13.900    06:28:30	-- scripts/common.sh@363 -- # (( v = 0 ))
00:18:13.900    06:28:30	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:13.900     06:28:30	-- scripts/common.sh@364 -- # decimal 1
00:18:13.900     06:28:30	-- scripts/common.sh@352 -- # local d=1
00:18:13.900     06:28:30	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:13.900     06:28:30	-- scripts/common.sh@354 -- # echo 1
00:18:13.900    06:28:30	-- scripts/common.sh@364 -- # ver1[v]=1
00:18:13.900     06:28:30	-- scripts/common.sh@365 -- # decimal 2
00:18:13.900     06:28:30	-- scripts/common.sh@352 -- # local d=2
00:18:13.900     06:28:30	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:13.900     06:28:30	-- scripts/common.sh@354 -- # echo 2
00:18:13.900    06:28:30	-- scripts/common.sh@365 -- # ver2[v]=2
00:18:13.900    06:28:30	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:18:13.900    06:28:30	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:18:13.900    06:28:30	-- scripts/common.sh@367 -- # return 0
00:18:13.900    06:28:30	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:13.900    06:28:30	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:18:13.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:13.900  		--rc genhtml_branch_coverage=1
00:18:13.900  		--rc genhtml_function_coverage=1
00:18:13.900  		--rc genhtml_legend=1
00:18:13.900  		--rc geninfo_all_blocks=1
00:18:13.900  		--rc geninfo_unexecuted_blocks=1
00:18:13.900  		
00:18:13.900  		'
00:18:13.900    06:28:30	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:18:13.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:13.900  		--rc genhtml_branch_coverage=1
00:18:13.900  		--rc genhtml_function_coverage=1
00:18:13.900  		--rc genhtml_legend=1
00:18:13.900  		--rc geninfo_all_blocks=1
00:18:13.900  		--rc geninfo_unexecuted_blocks=1
00:18:13.900  		
00:18:13.900  		'
00:18:13.900    06:28:30	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:18:13.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:13.900  		--rc genhtml_branch_coverage=1
00:18:13.900  		--rc genhtml_function_coverage=1
00:18:13.900  		--rc genhtml_legend=1
00:18:13.900  		--rc geninfo_all_blocks=1
00:18:13.900  		--rc geninfo_unexecuted_blocks=1
00:18:13.900  		
00:18:13.900  		'
00:18:13.900    06:28:30	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:18:13.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:13.900  		--rc genhtml_branch_coverage=1
00:18:13.900  		--rc genhtml_function_coverage=1
00:18:13.900  		--rc genhtml_legend=1
00:18:13.900  		--rc geninfo_all_blocks=1
00:18:13.900  		--rc geninfo_unexecuted_blocks=1
00:18:13.900  		
00:18:13.900  		'
00:18:13.900   06:28:30	-- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:13.900     06:28:30	-- nvmf/common.sh@7 -- # uname -s
00:18:13.900    06:28:30	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:13.900    06:28:30	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:13.900    06:28:30	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:13.900    06:28:30	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:13.900    06:28:30	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:13.900    06:28:30	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:13.900    06:28:30	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:13.900    06:28:30	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:13.900    06:28:30	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:13.900     06:28:30	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:13.900    06:28:30	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:18:13.900    06:28:30	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:18:13.900    06:28:30	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:13.900    06:28:30	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:13.900    06:28:30	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:18:13.900    06:28:30	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:13.900     06:28:30	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:13.900     06:28:30	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:13.900     06:28:30	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:13.900      06:28:30	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:13.900      06:28:30	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:13.900      06:28:30	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:13.900      06:28:30	-- paths/export.sh@5 -- # export PATH
00:18:13.900      06:28:30	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:13.900    06:28:30	-- nvmf/common.sh@46 -- # : 0
00:18:13.900    06:28:30	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:18:13.900    06:28:30	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:18:13.900    06:28:30	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:18:13.900    06:28:30	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:13.900    06:28:30	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:13.900    06:28:30	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:18:13.900    06:28:30	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:18:13.900    06:28:30	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:18:13.900   06:28:30	-- target/fabrics_fuzz.sh@11 -- # nvmftestinit
00:18:13.900   06:28:30	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:18:13.900   06:28:30	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:13.900   06:28:30	-- nvmf/common.sh@436 -- # prepare_net_devs
00:18:13.900   06:28:30	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:18:13.900   06:28:30	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:18:13.900   06:28:30	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:13.900   06:28:30	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:13.900    06:28:30	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:13.900   06:28:30	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:18:13.900   06:28:30	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:18:13.900   06:28:30	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:18:13.900   06:28:30	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:18:13.900   06:28:30	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:18:13.900   06:28:30	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:18:13.900   06:28:30	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:13.900   06:28:30	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:13.900   06:28:30	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:18:13.900   06:28:30	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:18:13.900   06:28:30	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:18:13.900   06:28:30	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:18:13.900   06:28:30	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:18:13.900   06:28:30	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:13.901   06:28:30	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:18:13.901   06:28:30	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:18:13.901   06:28:30	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:18:13.901   06:28:30	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:18:13.901   06:28:30	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:18:13.901   06:28:30	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:18:13.901  Cannot find device "nvmf_tgt_br"
00:18:13.901   06:28:30	-- nvmf/common.sh@154 -- # true
00:18:13.901   06:28:30	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:18:13.901  Cannot find device "nvmf_tgt_br2"
00:18:13.901   06:28:30	-- nvmf/common.sh@155 -- # true
00:18:13.901   06:28:30	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:18:13.901   06:28:30	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:18:14.159  Cannot find device "nvmf_tgt_br"
00:18:14.159   06:28:30	-- nvmf/common.sh@157 -- # true
00:18:14.159   06:28:30	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:18:14.159  Cannot find device "nvmf_tgt_br2"
00:18:14.159   06:28:30	-- nvmf/common.sh@158 -- # true
00:18:14.159   06:28:30	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:18:14.159   06:28:30	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:18:14.159   06:28:30	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:14.159  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:14.159   06:28:30	-- nvmf/common.sh@161 -- # true
00:18:14.159   06:28:30	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:14.159  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:14.159   06:28:30	-- nvmf/common.sh@162 -- # true
00:18:14.159   06:28:30	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:18:14.159   06:28:30	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:18:14.159   06:28:30	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:18:14.159   06:28:30	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:18:14.159   06:28:31	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:18:14.159   06:28:31	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:18:14.159   06:28:31	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:18:14.159   06:28:31	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:18:14.159   06:28:31	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:18:14.159   06:28:31	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:18:14.159   06:28:31	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:18:14.159   06:28:31	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:18:14.160   06:28:31	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:18:14.160   06:28:31	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:18:14.160   06:28:31	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:18:14.160   06:28:31	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:18:14.160   06:28:31	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:18:14.160   06:28:31	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:18:14.160   06:28:31	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:18:14.160   06:28:31	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:18:14.160   06:28:31	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:18:14.160   06:28:31	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:18:14.160   06:28:31	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:18:14.418   06:28:31	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:18:14.418  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:14.418  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms
00:18:14.418  
00:18:14.418  --- 10.0.0.2 ping statistics ---
00:18:14.418  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:14.418  rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:18:14.418   06:28:31	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:18:14.418  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:18:14.418  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms
00:18:14.418  
00:18:14.418  --- 10.0.0.3 ping statistics ---
00:18:14.418  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:14.418  rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:18:14.418   06:28:31	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:14.418  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:14.418  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms
00:18:14.418  
00:18:14.418  --- 10.0.0.1 ping statistics ---
00:18:14.418  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:14.418  rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:18:14.418   06:28:31	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:14.418   06:28:31	-- nvmf/common.sh@421 -- # return 0
00:18:14.418   06:28:31	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:18:14.418   06:28:31	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:14.418   06:28:31	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:18:14.418   06:28:31	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:18:14.418   06:28:31	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:14.418   06:28:31	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:18:14.418   06:28:31	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
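The nvmf_veth_init trace above builds the whole test topology from scratch: a dedicated network namespace for the target, veth pairs for the initiator and target sides, a bridge joining the host-side peers, and an iptables rule admitting NVMe/TCP traffic on port 4420. A minimal standalone sketch of the same steps (device, namespace and address names taken from the trace; the second target interface nvmf_tgt_if2/10.0.0.3 is handled the same way and omitted here; run as root):

  # namespace plus the two veth pairs used by the initiator and the target
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

  # move the target endpoint into the namespace and address both ends
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring the links up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # admit NVMe/TCP on the default port, allow bridge-local forwarding, verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2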
00:18:14.418   06:28:31	-- target/fabrics_fuzz.sh@14 -- # nvmfpid=79661
00:18:14.418   06:28:31	-- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:18:14.418   06:28:31	-- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:18:14.418   06:28:31	-- target/fabrics_fuzz.sh@18 -- # waitforlisten 79661
00:18:14.418   06:28:31	-- common/autotest_common.sh@829 -- # '[' -z 79661 ']'
00:18:14.418   06:28:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:14.418  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:14.418   06:28:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:14.418   06:28:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:14.418   06:28:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:14.418   06:28:31	-- common/autotest_common.sh@10 -- # set +x
00:18:15.355   06:28:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:15.355   06:28:32	-- common/autotest_common.sh@862 -- # return 0
00:18:15.355   06:28:32	-- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:15.355   06:28:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.355   06:28:32	-- common/autotest_common.sh@10 -- # set +x
00:18:15.355   06:28:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.355   06:28:32	-- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:18:15.355   06:28:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.355   06:28:32	-- common/autotest_common.sh@10 -- # set +x
00:18:15.355  Malloc0
00:18:15.355   06:28:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.355   06:28:32	-- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:15.355   06:28:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.355   06:28:32	-- common/autotest_common.sh@10 -- # set +x
00:18:15.355   06:28:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.355   06:28:32	-- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:15.355   06:28:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.355   06:28:32	-- common/autotest_common.sh@10 -- # set +x
00:18:15.355   06:28:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.355   06:28:32	-- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:15.355   06:28:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.355   06:28:32	-- common/autotest_common.sh@10 -- # set +x
00:18:15.355   06:28:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.355   06:28:32	-- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
00:18:15.355   06:28:32	-- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
00:18:15.921  Shutting down the fuzz application
00:18:15.921   06:28:32	-- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:18:16.180  Shutting down the fuzz application
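fabrics_fuzz.sh configures the target entirely over its RPC socket before pointing nvme_fuzz at the resulting listener. A condensed sketch of the same sequence, assuming the namespace from the init step above and using SPDK's rpc.py in place of the rpc_cmd wrapper seen in the trace:

  # start the target inside the test namespace: one core, all tracepoint groups
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

  # TCP transport plus one 64 MiB malloc-backed namespace behind cnode1
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # first a 30 s randomized pass with a fixed seed, then a replay of the bundled json cases
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
      -j ./test/app/fuzz/nvme_fuzz/example.json -a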
00:18:16.180   06:28:32	-- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:16.180   06:28:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.180   06:28:32	-- common/autotest_common.sh@10 -- # set +x
00:18:16.180   06:28:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.180   06:28:32	-- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:18:16.180   06:28:32	-- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:18:16.180   06:28:32	-- nvmf/common.sh@476 -- # nvmfcleanup
00:18:16.180   06:28:32	-- nvmf/common.sh@116 -- # sync
00:18:16.180   06:28:33	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:18:16.180   06:28:33	-- nvmf/common.sh@119 -- # set +e
00:18:16.180   06:28:33	-- nvmf/common.sh@120 -- # for i in {1..20}
00:18:16.180   06:28:33	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:18:16.180  rmmod nvme_tcp
00:18:16.180  rmmod nvme_fabrics
00:18:16.180  rmmod nvme_keyring
00:18:16.180   06:28:33	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:18:16.180   06:28:33	-- nvmf/common.sh@123 -- # set -e
00:18:16.180   06:28:33	-- nvmf/common.sh@124 -- # return 0
00:18:16.180   06:28:33	-- nvmf/common.sh@477 -- # '[' -n 79661 ']'
00:18:16.180   06:28:33	-- nvmf/common.sh@478 -- # killprocess 79661
00:18:16.180   06:28:33	-- common/autotest_common.sh@936 -- # '[' -z 79661 ']'
00:18:16.180   06:28:33	-- common/autotest_common.sh@940 -- # kill -0 79661
00:18:16.180    06:28:33	-- common/autotest_common.sh@941 -- # uname
00:18:16.180   06:28:33	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:16.180    06:28:33	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79661
00:18:16.180  killing process with pid 79661
00:18:16.180   06:28:33	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:16.180   06:28:33	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:16.180   06:28:33	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 79661'
00:18:16.180   06:28:33	-- common/autotest_common.sh@955 -- # kill 79661
00:18:16.180   06:28:33	-- common/autotest_common.sh@960 -- # wait 79661
00:18:16.442   06:28:33	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:18:16.442   06:28:33	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:18:16.442   06:28:33	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:18:16.442   06:28:33	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:16.442   06:28:33	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:18:16.442   06:28:33	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:16.442   06:28:33	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:16.442    06:28:33	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:16.442   06:28:33	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
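nvmftestfini then unwinds the setup: unload the initiator-side kernel modules, kill the target by pid, and remove the namespace and the initiator address. A rough sketch of the equivalent manual cleanup (pid 79661 and the names as traced; the modprobe calls are best-effort, so failures are ignored):

  # best-effort unload of the kernel initiator modules
  modprobe -v -r nvme-tcp || true
  modprobe -v -r nvme-fabrics || true

  # stop the nvmf_tgt started for this test
  kill 79661 2>/dev/null || true

  # deleting the namespace also removes the veth ends parked inside it
  ip netns delete nvmf_tgt_ns_spdk
  ip -4 addr flush nvmf_init_if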
00:18:16.442   06:28:33	-- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt
00:18:16.442  ************************************
00:18:16.442  END TEST nvmf_fuzz
00:18:16.442  ************************************
00:18:16.442  
00:18:16.442  real	0m2.790s
00:18:16.442  user	0m2.865s
00:18:16.442  sys	0m0.641s
00:18:16.442   06:28:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:16.442   06:28:33	-- common/autotest_common.sh@10 -- # set +x
00:18:16.702   06:28:33	-- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:18:16.702   06:28:33	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:16.702   06:28:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:16.702   06:28:33	-- common/autotest_common.sh@10 -- # set +x
00:18:16.702  ************************************
00:18:16.702  START TEST nvmf_multiconnection
00:18:16.702  ************************************
00:18:16.702   06:28:33	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:18:16.702  * Looking for test storage...
00:18:16.702  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:18:16.702    06:28:33	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:18:16.702     06:28:33	-- common/autotest_common.sh@1690 -- # lcov --version
00:18:16.702     06:28:33	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:18:16.702    06:28:33	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:18:16.702    06:28:33	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:18:16.702    06:28:33	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:18:16.702    06:28:33	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:18:16.702    06:28:33	-- scripts/common.sh@335 -- # IFS=.-:
00:18:16.702    06:28:33	-- scripts/common.sh@335 -- # read -ra ver1
00:18:16.702    06:28:33	-- scripts/common.sh@336 -- # IFS=.-:
00:18:16.702    06:28:33	-- scripts/common.sh@336 -- # read -ra ver2
00:18:16.702    06:28:33	-- scripts/common.sh@337 -- # local 'op=<'
00:18:16.702    06:28:33	-- scripts/common.sh@339 -- # ver1_l=2
00:18:16.702    06:28:33	-- scripts/common.sh@340 -- # ver2_l=1
00:18:16.702    06:28:33	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:18:16.702    06:28:33	-- scripts/common.sh@343 -- # case "$op" in
00:18:16.702    06:28:33	-- scripts/common.sh@344 -- # : 1
00:18:16.702    06:28:33	-- scripts/common.sh@363 -- # (( v = 0 ))
00:18:16.702    06:28:33	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:16.702     06:28:33	-- scripts/common.sh@364 -- # decimal 1
00:18:16.702     06:28:33	-- scripts/common.sh@352 -- # local d=1
00:18:16.702     06:28:33	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:16.702     06:28:33	-- scripts/common.sh@354 -- # echo 1
00:18:16.702    06:28:33	-- scripts/common.sh@364 -- # ver1[v]=1
00:18:16.702     06:28:33	-- scripts/common.sh@365 -- # decimal 2
00:18:16.702     06:28:33	-- scripts/common.sh@352 -- # local d=2
00:18:16.702     06:28:33	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:16.702     06:28:33	-- scripts/common.sh@354 -- # echo 2
00:18:16.702    06:28:33	-- scripts/common.sh@365 -- # ver2[v]=2
00:18:16.702    06:28:33	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:18:16.702    06:28:33	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:18:16.702    06:28:33	-- scripts/common.sh@367 -- # return 0
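The lt/cmp_versions trace above is scripts/common.sh deciding that the installed lcov (1.x) is older than 2, which is why the legacy --rc options get exported next. A simplified standalone sketch of that comparison, splitting each version on '.', '-' and ':' and comparing numeric fields left to right (the real helper also copes with non-numeric fields):

  lt() {  # succeeds when version $1 is strictly less than $2
      local -a ver1 ver2
      local IFS=.-: v max
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1  # equal is not less-than
  }

  lt 1.15 2 && echo 'old lcov: keep the legacy --rc flags'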
00:18:16.702    06:28:33	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:16.702    06:28:33	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:18:16.702  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:16.702  		--rc genhtml_branch_coverage=1
00:18:16.702  		--rc genhtml_function_coverage=1
00:18:16.702  		--rc genhtml_legend=1
00:18:16.702  		--rc geninfo_all_blocks=1
00:18:16.702  		--rc geninfo_unexecuted_blocks=1
00:18:16.702  		
00:18:16.702  		'
00:18:16.702    06:28:33	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:18:16.702  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:16.702  		--rc genhtml_branch_coverage=1
00:18:16.702  		--rc genhtml_function_coverage=1
00:18:16.702  		--rc genhtml_legend=1
00:18:16.702  		--rc geninfo_all_blocks=1
00:18:16.702  		--rc geninfo_unexecuted_blocks=1
00:18:16.702  		
00:18:16.702  		'
00:18:16.702    06:28:33	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:18:16.702  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:16.702  		--rc genhtml_branch_coverage=1
00:18:16.702  		--rc genhtml_function_coverage=1
00:18:16.702  		--rc genhtml_legend=1
00:18:16.702  		--rc geninfo_all_blocks=1
00:18:16.702  		--rc geninfo_unexecuted_blocks=1
00:18:16.702  		
00:18:16.702  		'
00:18:16.702    06:28:33	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:18:16.702  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:16.702  		--rc genhtml_branch_coverage=1
00:18:16.702  		--rc genhtml_function_coverage=1
00:18:16.702  		--rc genhtml_legend=1
00:18:16.702  		--rc geninfo_all_blocks=1
00:18:16.702  		--rc geninfo_unexecuted_blocks=1
00:18:16.702  		
00:18:16.702  		'
00:18:16.702   06:28:33	-- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:16.702     06:28:33	-- nvmf/common.sh@7 -- # uname -s
00:18:16.702    06:28:33	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:16.702    06:28:33	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:16.702    06:28:33	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:16.702    06:28:33	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:16.702    06:28:33	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:16.702    06:28:33	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:16.702    06:28:33	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:16.702    06:28:33	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:16.702    06:28:33	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:16.702     06:28:33	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:16.702    06:28:33	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:18:16.702    06:28:33	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:18:16.702    06:28:33	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:16.702    06:28:33	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:16.702    06:28:33	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:18:16.702    06:28:33	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:16.702     06:28:33	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:16.702     06:28:33	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:16.702     06:28:33	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:16.702      06:28:33	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:16.702      06:28:33	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:16.702      06:28:33	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:16.702      06:28:33	-- paths/export.sh@5 -- # export PATH
00:18:16.702      06:28:33	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:16.702    06:28:33	-- nvmf/common.sh@46 -- # : 0
00:18:16.702    06:28:33	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:18:16.702    06:28:33	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:18:16.702    06:28:33	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:18:16.702    06:28:33	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:16.702    06:28:33	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:16.702    06:28:33	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:18:16.702    06:28:33	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:18:16.702    06:28:33	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:18:16.702   06:28:33	-- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64
00:18:16.702   06:28:33	-- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:18:16.702   06:28:33	-- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:18:16.702   06:28:33	-- target/multiconnection.sh@16 -- # nvmftestinit
00:18:16.702   06:28:33	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:18:16.703   06:28:33	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:16.703   06:28:33	-- nvmf/common.sh@436 -- # prepare_net_devs
00:18:16.703   06:28:33	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:18:16.703   06:28:33	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:18:16.703   06:28:33	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:16.703   06:28:33	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:16.703    06:28:33	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:16.703   06:28:33	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:18:16.703   06:28:33	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:18:16.703   06:28:33	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:18:16.703   06:28:33	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:18:16.703   06:28:33	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:18:16.703   06:28:33	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:18:16.703   06:28:33	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:16.703   06:28:33	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:16.703   06:28:33	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:18:16.703   06:28:33	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:18:16.703   06:28:33	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:18:16.703   06:28:33	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:18:16.703   06:28:33	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:18:16.703   06:28:33	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:16.703   06:28:33	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:18:16.703   06:28:33	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:18:16.703   06:28:33	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:18:16.703   06:28:33	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:18:16.703   06:28:33	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:18:16.703   06:28:33	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:18:16.961  Cannot find device "nvmf_tgt_br"
00:18:16.961   06:28:33	-- nvmf/common.sh@154 -- # true
00:18:16.961   06:28:33	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:18:16.961  Cannot find device "nvmf_tgt_br2"
00:18:16.961   06:28:33	-- nvmf/common.sh@155 -- # true
00:18:16.961   06:28:33	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:18:16.961   06:28:33	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:18:16.961  Cannot find device "nvmf_tgt_br"
00:18:16.961   06:28:33	-- nvmf/common.sh@157 -- # true
00:18:16.961   06:28:33	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:18:16.961  Cannot find device "nvmf_tgt_br2"
00:18:16.961   06:28:33	-- nvmf/common.sh@158 -- # true
00:18:16.961   06:28:33	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:18:16.961   06:28:33	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:18:16.961   06:28:33	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:16.961  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:16.961   06:28:33	-- nvmf/common.sh@161 -- # true
00:18:16.961   06:28:33	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:16.961  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:16.961   06:28:33	-- nvmf/common.sh@162 -- # true
00:18:16.961   06:28:33	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:18:16.961   06:28:33	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:18:16.961   06:28:33	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:18:16.961   06:28:33	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:18:16.961   06:28:33	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:18:16.961   06:28:33	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:18:16.961   06:28:33	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:18:16.961   06:28:33	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:18:16.961   06:28:33	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:18:16.961   06:28:33	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:18:16.961   06:28:33	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:18:16.961   06:28:33	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:18:16.961   06:28:33	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:18:16.961   06:28:33	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:18:16.961   06:28:33	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:18:16.961   06:28:33	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:18:16.961   06:28:33	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:18:16.961   06:28:33	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:18:17.220   06:28:33	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:18:17.220   06:28:33	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:18:17.220   06:28:33	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:18:17.220   06:28:33	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:18:17.220   06:28:33	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:18:17.220   06:28:33	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:18:17.220  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:17.220  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms
00:18:17.220  
00:18:17.220  --- 10.0.0.2 ping statistics ---
00:18:17.220  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:17.220  rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:18:17.220   06:28:33	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:18:17.220  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:18:17.220  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms
00:18:17.220  
00:18:17.220  --- 10.0.0.3 ping statistics ---
00:18:17.220  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:17.220  rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:18:17.220   06:28:33	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:17.220  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:17.220  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:18:17.220  
00:18:17.220  --- 10.0.0.1 ping statistics ---
00:18:17.220  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:17.220  rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:18:17.220   06:28:33	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:17.220   06:28:33	-- nvmf/common.sh@421 -- # return 0
00:18:17.220   06:28:33	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:18:17.220   06:28:33	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:17.220   06:28:33	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:18:17.220   06:28:33	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:18:17.220   06:28:33	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:17.220   06:28:34	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:18:17.220   06:28:34	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:18:17.220   06:28:34	-- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:18:17.220   06:28:34	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:18:17.220   06:28:34	-- common/autotest_common.sh@722 -- # xtrace_disable
00:18:17.220   06:28:34	-- common/autotest_common.sh@10 -- # set +x
00:18:17.220   06:28:34	-- nvmf/common.sh@469 -- # nvmfpid=79872
00:18:17.220   06:28:34	-- nvmf/common.sh@470 -- # waitforlisten 79872
00:18:17.221   06:28:34	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:17.221  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:17.221   06:28:34	-- common/autotest_common.sh@829 -- # '[' -z 79872 ']'
00:18:17.221   06:28:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:17.221   06:28:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:17.221   06:28:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:17.221   06:28:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:17.221   06:28:34	-- common/autotest_common.sh@10 -- # set +x
00:18:17.221  [2024-12-16 06:28:34.091954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:17.221  [2024-12-16 06:28:34.092052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:17.479  [2024-12-16 06:28:34.228937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:17.479  [2024-12-16 06:28:34.305569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:18:17.479  [2024-12-16 06:28:34.305986] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:17.479  [2024-12-16 06:28:34.306039] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:17.479  [2024-12-16 06:28:34.306221] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:17.479  [2024-12-16 06:28:34.306601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:17.479  [2024-12-16 06:28:34.306660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:17.479  [2024-12-16 06:28:34.306727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:17.479  [2024-12-16 06:28:34.306726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
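nvmfappstart launches nvmf_tgt across four cores (-m 0xF) inside the namespace, and waitforlisten then blocks until the application answers on /var/tmp/spdk.sock. A rough sketch of that wait loop, assuming SPDK's rpc.py is on PATH and that a successful rpc_get_methods round-trip is an adequate liveness probe (the real helper in autotest_common.sh is more thorough):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < 100; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1              # target already died
          rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }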
00:18:18.414   06:28:35	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:18.414   06:28:35	-- common/autotest_common.sh@862 -- # return 0
00:18:18.414   06:28:35	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:18:18.414   06:28:35	-- common/autotest_common.sh@728 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414   06:28:35	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:18.414   06:28:35	-- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414  [2024-12-16 06:28:35.152183] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414    06:28:35	-- target/multiconnection.sh@21 -- # seq 1 11
00:18:18.414   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.414   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414  Malloc1
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414  [2024-12-16 06:28:35.237916] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.414   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414  Malloc2
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.414   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414  Malloc3
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.414   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.414   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:18:18.414   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.414   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.415   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.415   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:18:18.415   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.415   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.415   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.415   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.415   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:18:18.415   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.415   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673  Malloc4
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.673   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673  Malloc5
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.673   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673  Malloc6
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.673   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673  Malloc7
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.673   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673  Malloc8
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.673   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.673   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:18:18.673   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.673   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.932   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932  Malloc9
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.932   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932  Malloc10
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.932   06:28:35	-- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932  Malloc11
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.932   06:28:35	-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:18:18.932   06:28:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.932   06:28:35	-- common/autotest_common.sh@10 -- # set +x
00:18:18.932   06:28:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
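The block above is one unrolled pass of multiconnection.sh's setup loop, repeated for i = 1..11: each iteration backs subsystem nqn.2016-06.io.spdk:cnode$i with its own 64 MiB / 512 B-block malloc bdev and exposes it on the shared 10.0.0.2:4420 listener. The loop reduces to roughly:

  NVMF_SUBSYS=11
  for i in $(seq 1 $NVMF_SUBSYS); do
      rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done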
00:18:18.932    06:28:35	-- target/multiconnection.sh@28 -- # seq 1 11
00:18:18.932   06:28:35	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:18.932   06:28:35	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:19.191   06:28:36	-- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:18:19.191   06:28:36	-- common/autotest_common.sh@1187 -- # local i=0
00:18:19.191   06:28:36	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:19.191   06:28:36	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:19.191   06:28:36	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:21.095   06:28:38	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:21.095    06:28:38	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:21.095    06:28:38	-- common/autotest_common.sh@1196 -- # grep -c SPDK1
00:18:21.095   06:28:38	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:21.095   06:28:38	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:21.095   06:28:38	-- common/autotest_common.sh@1197 -- # return 0
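Each connect in this loop reuses the host NQN/ID that nvme gen-hostnqn produced when common.sh was sourced, and waitforserial then polls lsblk until a block device carrying the subsystem's serial (SPDK1, SPDK2, ...) appears. A condensed sketch of one connect-and-wait cycle (hostnqn/hostid values as traced above; the real helper caps the retries at 15):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
  nvme connect --hostnqn="$hostnqn" --hostid=637bef51-f626-4f39-9a90-287f11e9b21e \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

  waitforserial() {
      local serial=$1 i
      for (( i = 0; i <= 15; i++ )); do
          sleep 2
          # one matching lsblk row means the namespace surfaced as a block device
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1
  }

  waitforserial SPDK1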
00:18:21.095   06:28:38	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:21.095   06:28:38	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:18:21.353   06:28:38	-- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:18:21.353   06:28:38	-- common/autotest_common.sh@1187 -- # local i=0
00:18:21.353   06:28:38	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:21.353   06:28:38	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:21.353   06:28:38	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:23.257   06:28:40	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:23.257    06:28:40	-- common/autotest_common.sh@1196 -- # grep -c SPDK2
00:18:23.257    06:28:40	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:23.515   06:28:40	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:23.515   06:28:40	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:23.515   06:28:40	-- common/autotest_common.sh@1197 -- # return 0
00:18:23.515   06:28:40	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:23.515   06:28:40	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:18:23.515   06:28:40	-- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:18:23.515   06:28:40	-- common/autotest_common.sh@1187 -- # local i=0
00:18:23.515   06:28:40	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:23.515   06:28:40	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:23.515   06:28:40	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:26.048   06:28:42	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:26.048    06:28:42	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:26.048    06:28:42	-- common/autotest_common.sh@1196 -- # grep -c SPDK3
00:18:26.048   06:28:42	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:26.048   06:28:42	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:26.048   06:28:42	-- common/autotest_common.sh@1197 -- # return 0
00:18:26.048   06:28:42	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:26.048   06:28:42	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:18:26.048   06:28:42	-- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:18:26.048   06:28:42	-- common/autotest_common.sh@1187 -- # local i=0
00:18:26.048   06:28:42	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:26.048   06:28:42	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:26.048   06:28:42	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:27.951   06:28:44	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:27.951    06:28:44	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:27.951    06:28:44	-- common/autotest_common.sh@1196 -- # grep -c SPDK4
00:18:27.951   06:28:44	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:27.951   06:28:44	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:27.951   06:28:44	-- common/autotest_common.sh@1197 -- # return 0
00:18:27.951   06:28:44	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:27.951   06:28:44	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:18:27.951   06:28:44	-- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:18:27.951   06:28:44	-- common/autotest_common.sh@1187 -- # local i=0
00:18:27.951   06:28:44	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:27.951   06:28:44	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:27.951   06:28:44	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:29.884   06:28:46	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:29.884    06:28:46	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:29.884    06:28:46	-- common/autotest_common.sh@1196 -- # grep -c SPDK5
00:18:29.884   06:28:46	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:29.884   06:28:46	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:29.884   06:28:46	-- common/autotest_common.sh@1197 -- # return 0
00:18:29.884   06:28:46	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:29.884   06:28:46	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:18:30.143   06:28:47	-- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:18:30.143   06:28:47	-- common/autotest_common.sh@1187 -- # local i=0
00:18:30.143   06:28:47	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:30.143   06:28:47	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:30.143   06:28:47	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:32.049   06:28:49	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:32.049    06:28:49	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:32.049    06:28:49	-- common/autotest_common.sh@1196 -- # grep -c SPDK6
00:18:32.307   06:28:49	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:32.308   06:28:49	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:32.308   06:28:49	-- common/autotest_common.sh@1197 -- # return 0
00:18:32.308   06:28:49	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:32.308   06:28:49	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:18:32.308   06:28:49	-- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:18:32.308   06:28:49	-- common/autotest_common.sh@1187 -- # local i=0
00:18:32.308   06:28:49	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:32.308   06:28:49	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:32.308   06:28:49	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:34.843   06:28:51	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:34.843    06:28:51	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:34.843    06:28:51	-- common/autotest_common.sh@1196 -- # grep -c SPDK7
00:18:34.843   06:28:51	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:34.843   06:28:51	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:34.843   06:28:51	-- common/autotest_common.sh@1197 -- # return 0
00:18:34.843   06:28:51	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:34.843   06:28:51	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:18:34.843   06:28:51	-- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:18:34.843   06:28:51	-- common/autotest_common.sh@1187 -- # local i=0
00:18:34.843   06:28:51	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:34.843   06:28:51	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:34.843   06:28:51	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:36.748   06:28:53	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:36.748    06:28:53	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:36.748    06:28:53	-- common/autotest_common.sh@1196 -- # grep -c SPDK8
00:18:36.748   06:28:53	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:36.748   06:28:53	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:36.748   06:28:53	-- common/autotest_common.sh@1197 -- # return 0
00:18:36.748   06:28:53	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:36.748   06:28:53	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:18:36.748   06:28:53	-- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:18:36.748   06:28:53	-- common/autotest_common.sh@1187 -- # local i=0
00:18:36.748   06:28:53	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:36.748   06:28:53	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:36.748   06:28:53	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:39.283   06:28:55	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:39.283    06:28:55	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:39.283    06:28:55	-- common/autotest_common.sh@1196 -- # grep -c SPDK9
00:18:39.283   06:28:55	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:39.283   06:28:55	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:39.283   06:28:55	-- common/autotest_common.sh@1197 -- # return 0
00:18:39.283   06:28:55	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:39.283   06:28:55	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:18:39.283   06:28:55	-- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:18:39.283   06:28:55	-- common/autotest_common.sh@1187 -- # local i=0
00:18:39.283   06:28:55	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:39.283   06:28:55	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:39.283   06:28:55	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:41.188   06:28:57	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:41.188    06:28:57	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:41.188    06:28:57	-- common/autotest_common.sh@1196 -- # grep -c SPDK10
00:18:41.188   06:28:57	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:41.188   06:28:57	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:41.188   06:28:57	-- common/autotest_common.sh@1197 -- # return 0
00:18:41.188   06:28:57	-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:18:41.188   06:28:57	-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:18:41.188   06:28:58	-- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:18:41.188   06:28:58	-- common/autotest_common.sh@1187 -- # local i=0
00:18:41.188   06:28:58	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:18:41.188   06:28:58	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:18:41.188   06:28:58	-- common/autotest_common.sh@1194 -- # sleep 2
00:18:43.094   06:29:00	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:18:43.094    06:29:00	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:18:43.094    06:29:00	-- common/autotest_common.sh@1196 -- # grep -c SPDK11
00:18:43.352   06:29:00	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:18:43.352   06:29:00	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:18:43.352   06:29:00	-- common/autotest_common.sh@1197 -- # return 0
00:18:43.352   06:29:00	-- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:18:43.352  [global]
00:18:43.353  thread=1
00:18:43.353  invalidate=1
00:18:43.353  rw=read
00:18:43.353  time_based=1
00:18:43.353  runtime=10
00:18:43.353  ioengine=libaio
00:18:43.353  direct=1
00:18:43.353  bs=262144
00:18:43.353  iodepth=64
00:18:43.353  norandommap=1
00:18:43.353  numjobs=1
00:18:43.353  
00:18:43.353  [job0]
00:18:43.353  filename=/dev/nvme0n1
00:18:43.353  [job1]
00:18:43.353  filename=/dev/nvme10n1
00:18:43.353  [job2]
00:18:43.353  filename=/dev/nvme1n1
00:18:43.353  [job3]
00:18:43.353  filename=/dev/nvme2n1
00:18:43.353  [job4]
00:18:43.353  filename=/dev/nvme3n1
00:18:43.353  [job5]
00:18:43.353  filename=/dev/nvme4n1
00:18:43.353  [job6]
00:18:43.353  filename=/dev/nvme5n1
00:18:43.353  [job7]
00:18:43.353  filename=/dev/nvme6n1
00:18:43.353  [job8]
00:18:43.353  filename=/dev/nvme7n1
00:18:43.353  [job9]
00:18:43.353  filename=/dev/nvme8n1
00:18:43.353  [job10]
00:18:43.353  filename=/dev/nvme9n1
00:18:43.353  Could not set queue depth (nvme0n1)
00:18:43.353  Could not set queue depth (nvme10n1)
00:18:43.353  Could not set queue depth (nvme1n1)
00:18:43.353  Could not set queue depth (nvme2n1)
00:18:43.353  Could not set queue depth (nvme3n1)
00:18:43.353  Could not set queue depth (nvme4n1)
00:18:43.353  Could not set queue depth (nvme5n1)
00:18:43.353  Could not set queue depth (nvme6n1)
00:18:43.353  Could not set queue depth (nvme7n1)
00:18:43.353  Could not set queue depth (nvme8n1)
00:18:43.353  Could not set queue depth (nvme9n1)
00:18:43.611  job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:43.612  fio-3.35
00:18:43.612  Starting 11 threads
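The job file echoed above was generated by the fio-wrapper call at multiconnection.sh:33 from its -i (block size), -d (queue depth), -t (workload) and -r (runtime) arguments, with one [jobN] section per connected namespace. A minimal hand-run equivalent for a single device would look like the sketch below; it illustrates the same parameters and is not what the wrapper literally executes:

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=read --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
    --time_based --runtime=10 --norandommap --numjobs=1 --thread --invalidate=1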
00:18:55.823  
00:18:55.824  job0: (groupid=0, jobs=1): err= 0: pid=80355: Mon Dec 16 06:29:10 2024
00:18:55.824    read: IOPS=484, BW=121MiB/s (127MB/s)(1221MiB/10083msec)
00:18:55.824      slat (usec): min=19, max=140431, avg=1949.07, stdev=7771.02
00:18:55.824      clat (msec): min=70, max=369, avg=130.00, stdev=33.53
00:18:55.824       lat (msec): min=70, max=369, avg=131.95, stdev=34.74
00:18:55.824      clat percentiles (msec):
00:18:55.824       |  1.00th=[   83],  5.00th=[   92], 10.00th=[   97], 20.00th=[  105],
00:18:55.824       | 30.00th=[  110], 40.00th=[  115], 50.00th=[  121], 60.00th=[  129],
00:18:55.824       | 70.00th=[  142], 80.00th=[  153], 90.00th=[  169], 95.00th=[  209],
00:18:55.824       | 99.00th=[  232], 99.50th=[  245], 99.90th=[  262], 99.95th=[  262],
00:18:55.824       | 99.99th=[  372]
00:18:55.824     bw (  KiB/s): min=64128, max=163001, per=7.00%, avg=123442.90, stdev=29732.51, samples=20
00:18:55.824     iops        : min=  250, max=  636, avg=482.00, stdev=116.18, samples=20
00:18:55.824    lat (msec)   : 100=13.72%, 250=85.92%, 500=0.37%
00:18:55.824    cpu          : usr=0.21%, sys=1.66%, ctx=965, majf=0, minf=4097
00:18:55.824    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:18:55.824       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.824       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.824       issued rwts: total=4885,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.824       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.824  job1: (groupid=0, jobs=1): err= 0: pid=80356: Mon Dec 16 06:29:10 2024
00:18:55.824    read: IOPS=858, BW=215MiB/s (225MB/s)(2165MiB/10087msec)
00:18:55.824      slat (usec): min=20, max=54003, avg=1150.33, stdev=4297.64
00:18:55.824      clat (msec): min=18, max=171, avg=73.27, stdev=38.70
00:18:55.824       lat (msec): min=18, max=171, avg=74.42, stdev=39.38
00:18:55.824      clat percentiles (msec):
00:18:55.824       |  1.00th=[   21],  5.00th=[   26], 10.00th=[   31], 20.00th=[   35],
00:18:55.824       | 30.00th=[   39], 40.00th=[   43], 50.00th=[   61], 60.00th=[  100],
00:18:55.824       | 70.00th=[  108], 80.00th=[  114], 90.00th=[  122], 95.00th=[  128],
00:18:55.824       | 99.00th=[  142], 99.50th=[  148], 99.90th=[  163], 99.95th=[  165],
00:18:55.824       | 99.99th=[  171]
00:18:55.824     bw (  KiB/s): min=132096, max=469504, per=12.48%, avg=220003.35, stdev=126742.69, samples=20
00:18:55.824     iops        : min=  516, max= 1834, avg=859.25, stdev=495.18, samples=20
00:18:55.824    lat (msec)   : 20=0.37%, 50=47.09%, 100=13.63%, 250=38.91%
00:18:55.824    cpu          : usr=0.41%, sys=2.59%, ctx=1718, majf=0, minf=4097
00:18:55.824    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:18:55.824       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.824       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.824       issued rwts: total=8658,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.824       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.824  job2: (groupid=0, jobs=1): err= 0: pid=80357: Mon Dec 16 06:29:10 2024
00:18:55.824    read: IOPS=762, BW=191MiB/s (200MB/s)(1911MiB/10029msec)
00:18:55.824      slat (usec): min=17, max=47166, avg=1253.66, stdev=4515.79
00:18:55.824      clat (msec): min=16, max=180, avg=82.56, stdev=38.69
00:18:55.824       lat (msec): min=17, max=191, avg=83.81, stdev=39.37
00:18:55.824      clat percentiles (msec):
00:18:55.824       |  1.00th=[   21],  5.00th=[   26], 10.00th=[   30], 20.00th=[   38],
00:18:55.824       | 30.00th=[   58], 40.00th=[   71], 50.00th=[   84], 60.00th=[  100],
00:18:55.824       | 70.00th=[  109], 80.00th=[  118], 90.00th=[  136], 95.00th=[  144],
00:18:55.824       | 99.00th=[  157], 99.50th=[  161], 99.90th=[  176], 99.95th=[  176],
00:18:55.824       | 99.99th=[  182]
00:18:55.824     bw (  KiB/s): min=108544, max=497152, per=11.01%, avg=194059.90, stdev=103511.82, samples=20
00:18:55.824     iops        : min=  424, max= 1942, avg=757.90, stdev=404.37, samples=20
00:18:55.824    lat (msec)   : 20=0.61%, 50=27.63%, 100=32.41%, 250=39.35%
00:18:55.824    cpu          : usr=0.28%, sys=2.47%, ctx=1525, majf=0, minf=4097
00:18:55.824    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:18:55.824       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.824       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.824       issued rwts: total=7645,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.824       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.824  job3: (groupid=0, jobs=1): err= 0: pid=80358: Mon Dec 16 06:29:10 2024
00:18:55.824    read: IOPS=739, BW=185MiB/s (194MB/s)(1868MiB/10105msec)
00:18:55.824      slat (usec): min=17, max=67064, avg=1270.00, stdev=4746.69
00:18:55.824      clat (msec): min=16, max=240, avg=85.17, stdev=41.92
00:18:55.824       lat (msec): min=16, max=240, avg=86.44, stdev=42.63
00:18:55.824      clat percentiles (msec):
00:18:55.824       |  1.00th=[   21],  5.00th=[   26], 10.00th=[   31], 20.00th=[   40],
00:18:55.824       | 30.00th=[   57], 40.00th=[   70], 50.00th=[   83], 60.00th=[  101],
00:18:55.824       | 70.00th=[  111], 80.00th=[  126], 90.00th=[  142], 95.00th=[  150],
00:18:55.824       | 99.00th=[  169], 99.50th=[  194], 99.90th=[  220], 99.95th=[  241],
00:18:55.824       | 99.99th=[  241]
00:18:55.824     bw (  KiB/s): min=108032, max=455567, per=10.76%, avg=189656.15, stdev=103353.67, samples=20
00:18:55.824     iops        : min=  422, max= 1779, avg=740.70, stdev=403.69, samples=20
00:18:55.824    lat (msec)   : 20=0.92%, 50=27.07%, 100=31.65%, 250=40.35%
00:18:55.824    cpu          : usr=0.19%, sys=2.39%, ctx=1493, majf=0, minf=4097
00:18:55.824    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:18:55.824       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.824       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.824       issued rwts: total=7472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.824       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.824  job4: (groupid=0, jobs=1): err= 0: pid=80359: Mon Dec 16 06:29:10 2024
00:18:55.824    read: IOPS=738, BW=185MiB/s (193MB/s)(1862MiB/10089msec)
00:18:55.824      slat (usec): min=16, max=68020, avg=1321.95, stdev=4754.37
00:18:55.824      clat (msec): min=19, max=176, avg=85.24, stdev=30.19
00:18:55.824       lat (msec): min=19, max=177, avg=86.56, stdev=30.82
00:18:55.824      clat percentiles (msec):
00:18:55.824       |  1.00th=[   25],  5.00th=[   33], 10.00th=[   39], 20.00th=[   58],
00:18:55.824       | 30.00th=[   70], 40.00th=[   78], 50.00th=[   88], 60.00th=[  101],
00:18:55.824       | 70.00th=[  108], 80.00th=[  114], 90.00th=[  121], 95.00th=[  125],
00:18:55.824       | 99.00th=[  140], 99.50th=[  148], 99.90th=[  159], 99.95th=[  176],
00:18:55.824       | 99.99th=[  178]
00:18:55.824     bw (  KiB/s): min=136704, max=405504, per=10.72%, avg=188961.20, stdev=75628.86, samples=20
00:18:55.824     iops        : min=  534, max= 1584, avg=738.00, stdev=295.51, samples=20
00:18:55.824    lat (msec)   : 20=0.23%, 50=17.40%, 100=41.95%, 250=40.42%
00:18:55.824    cpu          : usr=0.29%, sys=2.20%, ctx=1634, majf=0, minf=4097
00:18:55.824    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:18:55.824       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.824       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.824       issued rwts: total=7447,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.824       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.824  job5: (groupid=0, jobs=1): err= 0: pid=80360: Mon Dec 16 06:29:10 2024
00:18:55.824    read: IOPS=660, BW=165MiB/s (173MB/s)(1665MiB/10082msec)
00:18:55.824      slat (usec): min=12, max=176259, avg=1415.92, stdev=5515.78
00:18:55.824      clat (msec): min=11, max=234, avg=95.32, stdev=39.61
00:18:55.824       lat (msec): min=11, max=406, avg=96.74, stdev=40.47
00:18:55.824      clat percentiles (msec):
00:18:55.824       |  1.00th=[   23],  5.00th=[   30], 10.00th=[   36], 20.00th=[   44],
00:18:55.824       | 30.00th=[   79], 40.00th=[  100], 50.00th=[  107], 60.00th=[  112],
00:18:55.824       | 70.00th=[  117], 80.00th=[  126], 90.00th=[  138], 95.00th=[  146],
00:18:55.824       | 99.00th=[  194], 99.50th=[  215], 99.90th=[  234], 99.95th=[  234],
00:18:55.824       | 99.99th=[  236]
00:18:55.824     bw (  KiB/s): min=109860, max=427008, per=9.57%, avg=168770.75, stdev=81710.54, samples=20
00:18:55.824     iops        : min=  429, max= 1668, avg=659.10, stdev=319.23, samples=20
00:18:55.824    lat (msec)   : 20=0.93%, 50=22.18%, 100=17.66%, 250=59.22%
00:18:55.824    cpu          : usr=0.24%, sys=2.09%, ctx=1297, majf=0, minf=4097
00:18:55.824    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:18:55.824       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.824       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.824       issued rwts: total=6658,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.824       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.824  job6: (groupid=0, jobs=1): err= 0: pid=80361: Mon Dec 16 06:29:10 2024
00:18:55.824    read: IOPS=477, BW=119MiB/s (125MB/s)(1208MiB/10111msec)
00:18:55.824      slat (usec): min=16, max=110635, avg=2024.01, stdev=6920.34
00:18:55.824      clat (msec): min=23, max=297, avg=131.75, stdev=33.73
00:18:55.824       lat (msec): min=23, max=339, avg=133.78, stdev=34.70
00:18:55.824      clat percentiles (msec):
00:18:55.824       |  1.00th=[   81],  5.00th=[   94], 10.00th=[  100], 20.00th=[  106],
00:18:55.824       | 30.00th=[  111], 40.00th=[  116], 50.00th=[  124], 60.00th=[  136],
00:18:55.824       | 70.00th=[  144], 80.00th=[  153], 90.00th=[  174], 95.00th=[  213],
00:18:55.824       | 99.00th=[  232], 99.50th=[  241], 99.90th=[  275], 99.95th=[  292],
00:18:55.824       | 99.99th=[  300]
00:18:55.824     bw (  KiB/s): min=72704, max=153804, per=6.92%, avg=121969.70, stdev=25574.12, samples=20
00:18:55.824     iops        : min=  284, max=  600, avg=476.40, stdev=99.85, samples=20
00:18:55.824    lat (msec)   : 50=0.21%, 100=10.91%, 250=88.55%, 500=0.33%
00:18:55.824    cpu          : usr=0.18%, sys=1.55%, ctx=1087, majf=0, minf=4097
00:18:55.824    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:18:55.824       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.824       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.824       issued rwts: total=4830,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.824       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.824  job7: (groupid=0, jobs=1): err= 0: pid=80362: Mon Dec 16 06:29:10 2024
00:18:55.824    read: IOPS=467, BW=117MiB/s (123MB/s)(1182MiB/10107msec)
00:18:55.824      slat (usec): min=20, max=169871, avg=2084.90, stdev=8398.39
00:18:55.824      clat (msec): min=68, max=372, avg=134.45, stdev=34.19
00:18:55.824       lat (msec): min=68, max=396, avg=136.54, stdev=35.48
00:18:55.824      clat percentiles (msec):
00:18:55.824       |  1.00th=[   86],  5.00th=[   96], 10.00th=[  102], 20.00th=[  107],
00:18:55.824       | 30.00th=[  111], 40.00th=[  117], 50.00th=[  126], 60.00th=[  140],
00:18:55.824       | 70.00th=[  146], 80.00th=[  157], 90.00th=[  176], 95.00th=[  218],
00:18:55.824       | 99.00th=[  241], 99.50th=[  245], 99.90th=[  279], 99.95th=[  279],
00:18:55.824       | 99.99th=[  372]
00:18:55.824     bw (  KiB/s): min=64512, max=152271, per=6.77%, avg=119392.55, stdev=25838.62, samples=20
00:18:55.824     iops        : min=  252, max=  594, avg=466.20, stdev=100.79, samples=20
00:18:55.825    lat (msec)   : 100=8.31%, 250=91.56%, 500=0.13%
00:18:55.825    cpu          : usr=0.20%, sys=1.51%, ctx=955, majf=0, minf=4097
00:18:55.825    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:18:55.825       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.825       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.825       issued rwts: total=4728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.825       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.825  job8: (groupid=0, jobs=1): err= 0: pid=80363: Mon Dec 16 06:29:10 2024
00:18:55.825    read: IOPS=638, BW=160MiB/s (167MB/s)(1613MiB/10111msec)
00:18:55.825      slat (usec): min=15, max=95629, avg=1487.29, stdev=5392.71
00:18:55.825      clat (msec): min=3, max=299, avg=98.64, stdev=41.18
00:18:55.825       lat (msec): min=3, max=325, avg=100.12, stdev=42.08
00:18:55.825      clat percentiles (msec):
00:18:55.825       |  1.00th=[    6],  5.00th=[   45], 10.00th=[   61], 20.00th=[   69],
00:18:55.825       | 30.00th=[   75], 40.00th=[   86], 50.00th=[  100], 60.00th=[  108],
00:18:55.825       | 70.00th=[  113], 80.00th=[  120], 90.00th=[  144], 95.00th=[  169],
00:18:55.825       | 99.00th=[  232], 99.50th=[  236], 99.90th=[  257], 99.95th=[  259],
00:18:55.825       | 99.99th=[  300]
00:18:55.825     bw (  KiB/s): min=73362, max=308224, per=9.28%, avg=163526.25, stdev=56018.94, samples=20
00:18:55.825     iops        : min=  286, max= 1204, avg=638.70, stdev=218.89, samples=20
00:18:55.825    lat (msec)   : 4=0.20%, 10=1.83%, 20=1.29%, 50=1.95%, 100=45.61%
00:18:55.825    lat (msec)   : 250=49.01%, 500=0.11%
00:18:55.825    cpu          : usr=0.23%, sys=2.27%, ctx=1285, majf=0, minf=4098
00:18:55.825    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:18:55.825       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.825       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.825       issued rwts: total=6452,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.825       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.825  job9: (groupid=0, jobs=1): err= 0: pid=80365: Mon Dec 16 06:29:10 2024
00:18:55.825    read: IOPS=545, BW=136MiB/s (143MB/s)(1377MiB/10091msec)
00:18:55.825      slat (usec): min=16, max=183740, avg=1766.86, stdev=7440.28
00:18:55.825      clat (msec): min=13, max=262, avg=115.24, stdev=41.49
00:18:55.825       lat (msec): min=17, max=385, avg=117.00, stdev=42.56
00:18:55.825      clat percentiles (msec):
00:18:55.825       |  1.00th=[   51],  5.00th=[   65], 10.00th=[   71], 20.00th=[   79],
00:18:55.825       | 30.00th=[   86], 40.00th=[  101], 50.00th=[  111], 60.00th=[  120],
00:18:55.825       | 70.00th=[  130], 80.00th=[  146], 90.00th=[  167], 95.00th=[  211],
00:18:55.825       | 99.00th=[  226], 99.50th=[  232], 99.90th=[  264], 99.95th=[  264],
00:18:55.825       | 99.99th=[  264]
00:18:55.825     bw (  KiB/s): min=64512, max=219720, per=7.91%, avg=139352.25, stdev=46976.82, samples=20
00:18:55.825     iops        : min=  252, max=  858, avg=544.25, stdev=183.53, samples=20
00:18:55.825    lat (msec)   : 20=0.25%, 50=0.82%, 100=38.85%, 250=59.97%, 500=0.11%
00:18:55.825    cpu          : usr=0.17%, sys=1.73%, ctx=1115, majf=0, minf=4097
00:18:55.825    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:18:55.825       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.825       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.825       issued rwts: total=5508,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.825       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.825  job10: (groupid=0, jobs=1): err= 0: pid=80366: Mon Dec 16 06:29:10 2024
00:18:55.825    read: IOPS=528, BW=132MiB/s (139MB/s)(1334MiB/10090msec)
00:18:55.825      slat (usec): min=17, max=128955, avg=1845.53, stdev=7065.43
00:18:55.825      clat (msec): min=25, max=333, avg=118.98, stdev=43.19
00:18:55.825       lat (msec): min=25, max=347, avg=120.83, stdev=44.28
00:18:55.825      clat percentiles (msec):
00:18:55.825       |  1.00th=[   55],  5.00th=[   65], 10.00th=[   70], 20.00th=[   78],
00:18:55.825       | 30.00th=[   86], 40.00th=[  105], 50.00th=[  116], 60.00th=[  125],
00:18:55.825       | 70.00th=[  140], 80.00th=[  153], 90.00th=[  169], 95.00th=[  215],
00:18:55.825       | 99.00th=[  239], 99.50th=[  245], 99.90th=[  271], 99.95th=[  321],
00:18:55.825       | 99.99th=[  334]
00:18:55.825     bw (  KiB/s): min=68608, max=223808, per=7.65%, avg=134904.85, stdev=46733.29, samples=20
00:18:55.825     iops        : min=  268, max=  874, avg=526.90, stdev=182.49, samples=20
00:18:55.825    lat (msec)   : 50=0.30%, 100=37.30%, 250=62.14%, 500=0.26%
00:18:55.825    cpu          : usr=0.15%, sys=1.93%, ctx=1074, majf=0, minf=4097
00:18:55.825    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:18:55.825       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:55.825       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:18:55.825       issued rwts: total=5335,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:55.825       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:55.825  
00:18:55.825  Run status group 0 (all jobs):
00:18:55.825     READ: bw=1721MiB/s (1805MB/s), 117MiB/s-215MiB/s (123MB/s-225MB/s), io=17.0GiB (18.2GB), run=10029-10111msec
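In the per-job "bw" lines above, per= is that job's share of the group's aggregate bandwidth; e.g. for job0, 123442.90 KiB/s divided by the 1721 MiB/s group total (1721 x 1024 = 1762304 KiB/s) is about 7.0%, matching its per=7.00%.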
00:18:55.825  
00:18:55.825  Disk stats (read/write):
00:18:55.825    nvme0n1: ios=9652/0, merge=0/0, ticks=1243147/0, in_queue=1243147, util=97.40%
00:18:55.825    nvme10n1: ios=17196/0, merge=0/0, ticks=1231281/0, in_queue=1231281, util=97.47%
00:18:55.825    nvme1n1: ios=15178/0, merge=0/0, ticks=1238790/0, in_queue=1238790, util=97.40%
00:18:55.825    nvme2n1: ios=14817/0, merge=0/0, ticks=1232207/0, in_queue=1232207, util=97.52%
00:18:55.825    nvme3n1: ios=14766/0, merge=0/0, ticks=1233692/0, in_queue=1233692, util=97.64%
00:18:55.825    nvme4n1: ios=13189/0, merge=0/0, ticks=1239461/0, in_queue=1239461, util=97.89%
00:18:55.825    nvme5n1: ios=9532/0, merge=0/0, ticks=1240486/0, in_queue=1240486, util=98.09%
00:18:55.825    nvme6n1: ios=9349/0, merge=0/0, ticks=1239423/0, in_queue=1239423, util=98.38%
00:18:55.825    nvme7n1: ios=12790/0, merge=0/0, ticks=1239078/0, in_queue=1239078, util=98.75%
00:18:55.825    nvme8n1: ios=10912/0, merge=0/0, ticks=1239651/0, in_queue=1239651, util=98.55%
00:18:55.825    nvme9n1: ios=10556/0, merge=0/0, ticks=1238635/0, in_queue=1238635, util=98.97%
00:18:55.825   06:29:10	-- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:18:55.825  [global]
00:18:55.825  thread=1
00:18:55.825  invalidate=1
00:18:55.825  rw=randwrite
00:18:55.825  time_based=1
00:18:55.825  runtime=10
00:18:55.825  ioengine=libaio
00:18:55.825  direct=1
00:18:55.825  bs=262144
00:18:55.825  iodepth=64
00:18:55.825  norandommap=1
00:18:55.825  numjobs=1
00:18:55.825  
00:18:55.825  [job0]
00:18:55.825  filename=/dev/nvme0n1
00:18:55.825  [job1]
00:18:55.825  filename=/dev/nvme10n1
00:18:55.825  [job2]
00:18:55.825  filename=/dev/nvme1n1
00:18:55.825  [job3]
00:18:55.825  filename=/dev/nvme2n1
00:18:55.825  [job4]
00:18:55.825  filename=/dev/nvme3n1
00:18:55.825  [job5]
00:18:55.825  filename=/dev/nvme4n1
00:18:55.825  [job6]
00:18:55.825  filename=/dev/nvme5n1
00:18:55.825  [job7]
00:18:55.825  filename=/dev/nvme6n1
00:18:55.825  [job8]
00:18:55.825  filename=/dev/nvme7n1
00:18:55.825  [job9]
00:18:55.825  filename=/dev/nvme8n1
00:18:55.825  [job10]
00:18:55.825  filename=/dev/nvme9n1
00:18:55.825  Could not set queue depth (nvme0n1)
00:18:55.825  Could not set queue depth (nvme10n1)
00:18:55.825  Could not set queue depth (nvme1n1)
00:18:55.825  Could not set queue depth (nvme2n1)
00:18:55.825  Could not set queue depth (nvme3n1)
00:18:55.825  Could not set queue depth (nvme4n1)
00:18:55.825  Could not set queue depth (nvme5n1)
00:18:55.825  Could not set queue depth (nvme6n1)
00:18:55.825  Could not set queue depth (nvme7n1)
00:18:55.825  Could not set queue depth (nvme8n1)
00:18:55.825  Could not set queue depth (nvme9n1)
00:18:55.825  job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:18:55.825  fio-3.35
00:18:55.825  Starting 11 threads
00:19:05.819  
00:19:05.820  job0: (groupid=0, jobs=1): err= 0: pid=80563: Mon Dec 16 06:29:21 2024
00:19:05.820    write: IOPS=245, BW=61.4MiB/s (64.4MB/s)(626MiB/10204msec); 0 zone resets
00:19:05.820      slat (usec): min=18, max=40498, avg=3895.76, stdev=7038.89
00:19:05.820      clat (msec): min=24, max=438, avg=256.69, stdev=36.75
00:19:05.820       lat (msec): min=24, max=438, avg=260.58, stdev=36.79
00:19:05.820      clat percentiles (msec):
00:19:05.820       |  1.00th=[   61],  5.00th=[  220], 10.00th=[  239], 20.00th=[  247],
00:19:05.820       | 30.00th=[  253], 40.00th=[  259], 50.00th=[  262], 60.00th=[  264],
00:19:05.820       | 70.00th=[  271], 80.00th=[  275], 90.00th=[  279], 95.00th=[  284],
00:19:05.820       | 99.00th=[  347], 99.50th=[  393], 99.90th=[  426], 99.95th=[  439],
00:19:05.820       | 99.99th=[  439]
00:19:05.820     bw (  KiB/s): min=57344, max=78336, per=4.57%, avg=62471.55, stdev=4536.64, samples=20
00:19:05.820     iops        : min=  224, max=  306, avg=243.95, stdev=17.77, samples=20
00:19:05.820    lat (msec)   : 50=0.80%, 100=1.00%, 250=23.07%, 500=75.13%
00:19:05.820    cpu          : usr=0.51%, sys=0.67%, ctx=2361, majf=0, minf=1
00:19:05.820    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5%
00:19:05.820       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.820       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.820       issued rwts: total=0,2505,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.820       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.820  job1: (groupid=0, jobs=1): err= 0: pid=80564: Mon Dec 16 06:29:21 2024
00:19:05.820    write: IOPS=627, BW=157MiB/s (164MB/s)(1581MiB/10083msec); 0 zone resets
00:19:05.820      slat (usec): min=18, max=14496, avg=1553.97, stdev=2687.46
00:19:05.820      clat (msec): min=20, max=179, avg=100.45, stdev=13.10
00:19:05.820       lat (msec): min=20, max=179, avg=102.00, stdev=12.99
00:19:05.820      clat percentiles (msec):
00:19:05.820       |  1.00th=[   90],  5.00th=[   92], 10.00th=[   92], 20.00th=[   94],
00:19:05.820       | 30.00th=[   97], 40.00th=[   97], 50.00th=[   99], 60.00th=[  100],
00:19:05.820       | 70.00th=[  100], 80.00th=[  101], 90.00th=[  103], 95.00th=[  140],
00:19:05.820       | 99.00th=[  144], 99.50th=[  148], 99.90th=[  167], 99.95th=[  174],
00:19:05.820       | 99.99th=[  180]
00:19:05.820     bw (  KiB/s): min=116502, max=168448, per=11.72%, avg=160186.40, stdev=15950.34, samples=20
00:19:05.820     iops        : min=  455, max=  658, avg=625.55, stdev=62.25, samples=20
00:19:05.820    lat (msec)   : 50=0.27%, 100=80.68%, 250=19.05%
00:19:05.820    cpu          : usr=1.16%, sys=1.53%, ctx=7954, majf=0, minf=1
00:19:05.820    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:19:05.820       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.820       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.820       issued rwts: total=0,6324,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.820       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.820  job2: (groupid=0, jobs=1): err= 0: pid=80576: Mon Dec 16 06:29:21 2024
00:19:05.820    write: IOPS=205, BW=51.4MiB/s (53.9MB/s)(524MiB/10189msec); 0 zone resets
00:19:05.820      slat (usec): min=18, max=102024, avg=4653.88, stdev=9838.45
00:19:05.820      clat (msec): min=106, max=421, avg=306.46, stdev=40.52
00:19:05.820       lat (msec): min=106, max=421, avg=311.12, stdev=40.07
00:19:05.820      clat percentiles (msec):
00:19:05.820       |  1.00th=[  171],  5.00th=[  228], 10.00th=[  247], 20.00th=[  275],
00:19:05.820       | 30.00th=[  292], 40.00th=[  313], 50.00th=[  321], 60.00th=[  330],
00:19:05.820       | 70.00th=[  334], 80.00th=[  338], 90.00th=[  342], 95.00th=[  347],
00:19:05.820       | 99.00th=[  359], 99.50th=[  372], 99.90th=[  405], 99.95th=[  422],
00:19:05.820       | 99.99th=[  422]
00:19:05.820     bw (  KiB/s): min=41984, max=68982, per=3.81%, avg=52001.75, stdev=6153.71, samples=20
00:19:05.820     iops        : min=  164, max=  269, avg=203.00, stdev=23.98, samples=20
00:19:05.820    lat (msec)   : 250=10.98%, 500=89.02%
00:19:05.820    cpu          : usr=0.31%, sys=0.80%, ctx=2039, majf=0, minf=1
00:19:05.820    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:19:05.820       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.820       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.820       issued rwts: total=0,2095,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.820       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.820  job3: (groupid=0, jobs=1): err= 0: pid=80577: Mon Dec 16 06:29:21 2024
00:19:05.820    write: IOPS=624, BW=156MiB/s (164MB/s)(1576MiB/10088msec); 0 zone resets
00:19:05.820      slat (usec): min=18, max=15496, avg=1581.99, stdev=2719.03
00:19:05.820      clat (msec): min=2, max=179, avg=100.75, stdev=13.96
00:19:05.820       lat (msec): min=8, max=179, avg=102.33, stdev=13.90
00:19:05.820      clat percentiles (msec):
00:19:05.820       |  1.00th=[   90],  5.00th=[   92], 10.00th=[   92], 20.00th=[   94],
00:19:05.820       | 30.00th=[   97], 40.00th=[   97], 50.00th=[   99], 60.00th=[   99],
00:19:05.820       | 70.00th=[  100], 80.00th=[  101], 90.00th=[  103], 95.00th=[  142],
00:19:05.820       | 99.00th=[  153], 99.50th=[  159], 99.90th=[  167], 99.95th=[  174],
00:19:05.820       | 99.99th=[  180]
00:19:05.820     bw (  KiB/s): min=107008, max=168448, per=11.69%, avg=159680.05, stdev=17649.77, samples=20
00:19:05.820     iops        : min=  418, max=  658, avg=623.55, stdev=68.94, samples=20
00:19:05.820    lat (msec)   : 4=0.02%, 20=0.06%, 50=0.19%, 100=80.96%, 250=18.77%
00:19:05.820    cpu          : usr=1.01%, sys=1.69%, ctx=7325, majf=0, minf=1
00:19:05.820    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:19:05.820       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.820       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.820       issued rwts: total=0,6303,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.820       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.820  job4: (groupid=0, jobs=1): err= 0: pid=80578: Mon Dec 16 06:29:21 2024
00:19:05.820    write: IOPS=202, BW=50.7MiB/s (53.1MB/s)(517MiB/10202msec); 0 zone resets
00:19:05.820      slat (usec): min=19, max=89248, avg=4782.09, stdev=10295.95
00:19:05.820      clat (msec): min=2, max=462, avg=310.94, stdev=58.59
00:19:05.820       lat (msec): min=2, max=462, avg=315.72, stdev=58.53
00:19:05.820      clat percentiles (msec):
00:19:05.820       |  1.00th=[   34],  5.00th=[  230], 10.00th=[  262], 20.00th=[  284],
00:19:05.820       | 30.00th=[  305], 40.00th=[  321], 50.00th=[  330], 60.00th=[  338],
00:19:05.820       | 70.00th=[  342], 80.00th=[  347], 90.00th=[  355], 95.00th=[  359],
00:19:05.820       | 99.00th=[  368], 99.50th=[  401], 99.90th=[  447], 99.95th=[  464],
00:19:05.820       | 99.99th=[  464]
00:19:05.820     bw (  KiB/s): min=43008, max=71168, per=3.75%, avg=51278.00, stdev=6478.70, samples=20
00:19:05.820     iops        : min=  168, max=  278, avg=200.20, stdev=25.34, samples=20
00:19:05.820    lat (msec)   : 4=0.10%, 10=0.05%, 20=0.39%, 50=0.82%, 100=1.35%
00:19:05.820    lat (msec)   : 250=5.08%, 500=92.21%
00:19:05.820    cpu          : usr=0.40%, sys=0.76%, ctx=1678, majf=0, minf=1
00:19:05.820    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:19:05.820       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.820       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.820       issued rwts: total=0,2067,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.820       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.820  job5: (groupid=0, jobs=1): err= 0: pid=80583: Mon Dec 16 06:29:21 2024
00:19:05.820    write: IOPS=1270, BW=318MiB/s (333MB/s)(3189MiB/10043msec); 0 zone resets
00:19:05.820      slat (usec): min=21, max=41697, avg=771.45, stdev=1452.98
00:19:05.820      clat (usec): min=1624, max=199397, avg=49603.42, stdev=19175.99
00:19:05.820       lat (usec): min=1699, max=199457, avg=50374.87, stdev=19443.92
00:19:05.820      clat percentiles (msec):
00:19:05.820       |  1.00th=[   25],  5.00th=[   44], 10.00th=[   44], 20.00th=[   45],
00:19:05.820       | 30.00th=[   46], 40.00th=[   46], 50.00th=[   47], 60.00th=[   47],
00:19:05.820       | 70.00th=[   48], 80.00th=[   48], 90.00th=[   49], 95.00th=[   51],
00:19:05.820       | 99.00th=[  144], 99.50th=[  146], 99.90th=[  174], 99.95th=[  182],
00:19:05.820       | 99.99th=[  190]
00:19:05.820     bw (  KiB/s): min=99527, max=355840, per=23.77%, avg=324776.45, stdev=74236.65, samples=20
00:19:05.820     iops        : min=  388, max= 1390, avg=1268.55, stdev=290.09, samples=20
00:19:05.820    lat (msec)   : 2=0.01%, 4=0.09%, 10=0.25%, 20=0.45%, 50=94.03%
00:19:05.820    lat (msec)   : 100=1.25%, 250=3.93%
00:19:05.820    cpu          : usr=2.89%, sys=2.23%, ctx=17528, majf=0, minf=1
00:19:05.820    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:19:05.820       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.820       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.820       issued rwts: total=0,12756,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.820       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.820  job6: (groupid=0, jobs=1): err= 0: pid=80584: Mon Dec 16 06:29:21 2024
00:19:05.820    write: IOPS=298, BW=74.6MiB/s (78.2MB/s)(752MiB/10076msec); 0 zone resets
00:19:05.820      slat (usec): min=19, max=106808, avg=3229.76, stdev=8006.97
00:19:05.820      clat (msec): min=2, max=362, avg=211.12, stdev=119.02
00:19:05.820       lat (msec): min=3, max=362, avg=214.35, stdev=120.71
00:19:05.820      clat percentiles (msec):
00:19:05.820       |  1.00th=[   15],  5.00th=[   54], 10.00th=[   82], 20.00th=[   85],
00:19:05.820       | 30.00th=[   88], 40.00th=[   89], 50.00th=[  275], 60.00th=[  305],
00:19:05.820       | 70.00th=[  317], 80.00th=[  330], 90.00th=[  342], 95.00th=[  347],
00:19:05.820       | 99.00th=[  351], 99.50th=[  351], 99.90th=[  355], 99.95th=[  363],
00:19:05.820       | 99.99th=[  363]
00:19:05.820     bw (  KiB/s): min=43008, max=192000, per=5.51%, avg=75315.65, stdev=51978.03, samples=20
00:19:05.820     iops        : min=  168, max=  750, avg=294.05, stdev=203.03, samples=20
00:19:05.820    lat (msec)   : 4=0.07%, 10=0.47%, 20=1.13%, 50=2.89%, 100=37.15%
00:19:05.820    lat (msec)   : 250=4.06%, 500=54.24%
00:19:05.820    cpu          : usr=0.66%, sys=0.90%, ctx=3341, majf=0, minf=1
00:19:05.820    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9%
00:19:05.820       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.820       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.820       issued rwts: total=0,3007,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.820       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.820  job7: (groupid=0, jobs=1): err= 0: pid=80585: Mon Dec 16 06:29:21 2024
00:19:05.820    write: IOPS=1311, BW=328MiB/s (344MB/s)(3304MiB/10080msec); 0 zone resets
00:19:05.820      slat (usec): min=22, max=10910, avg=753.77, stdev=1314.72
00:19:05.820      clat (msec): min=15, max=162, avg=48.04, stdev=13.75
00:19:05.820       lat (msec): min=15, max=162, avg=48.79, stdev=13.92
00:19:05.820      clat percentiles (msec):
00:19:05.820       |  1.00th=[   40],  5.00th=[   41], 10.00th=[   42], 20.00th=[   43],
00:19:05.820       | 30.00th=[   43], 40.00th=[   44], 50.00th=[   44], 60.00th=[   45],
00:19:05.820       | 70.00th=[   45], 80.00th=[   46], 90.00th=[   74], 95.00th=[   87],
00:19:05.820       | 99.00th=[   90], 99.50th=[  103], 99.90th=[  146], 99.95th=[  153],
00:19:05.821       | 99.99th=[  159]
00:19:05.821     bw (  KiB/s): min=185344, max=393728, per=24.64%, avg=336590.50, stdev=67080.57, samples=20
00:19:05.821     iops        : min=  724, max= 1538, avg=1314.75, stdev=262.02, samples=20
00:19:05.821    lat (msec)   : 20=0.03%, 50=89.29%, 100=10.16%, 250=0.51%
00:19:05.821    cpu          : usr=2.93%, sys=2.32%, ctx=17646, majf=0, minf=1
00:19:05.821    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:19:05.821       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.821       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.821       issued rwts: total=0,13217,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.821       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.821  job8: (groupid=0, jobs=1): err= 0: pid=80586: Mon Dec 16 06:29:21 2024
00:19:05.821    write: IOPS=199, BW=49.9MiB/s (52.3MB/s)(509MiB/10200msec); 0 zone resets
00:19:05.821      slat (usec): min=23, max=189131, avg=4911.99, stdev=11056.32
00:19:05.821      clat (msec): min=3, max=473, avg=315.87, stdev=57.37
00:19:05.821       lat (msec): min=3, max=473, avg=320.78, stdev=57.08
00:19:05.821      clat percentiles (msec):
00:19:05.821       |  1.00th=[   55],  5.00th=[  243], 10.00th=[  262], 20.00th=[  279],
00:19:05.821       | 30.00th=[  305], 40.00th=[  330], 50.00th=[  334], 60.00th=[  342],
00:19:05.821       | 70.00th=[  347], 80.00th=[  351], 90.00th=[  359], 95.00th=[  359],
00:19:05.821       | 99.00th=[  409], 99.50th=[  435], 99.90th=[  460], 99.95th=[  472],
00:19:05.821       | 99.99th=[  472]
00:19:05.821     bw (  KiB/s): min=42496, max=61440, per=3.69%, avg=50428.00, stdev=5774.92, samples=20
00:19:05.821     iops        : min=  166, max=  240, avg=196.80, stdev=22.61, samples=20
00:19:05.821    lat (msec)   : 4=0.20%, 20=0.10%, 50=0.59%, 100=1.38%, 250=4.62%
00:19:05.821    lat (msec)   : 500=93.12%
00:19:05.821    cpu          : usr=0.45%, sys=0.71%, ctx=2310, majf=0, minf=1
00:19:05.821    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
00:19:05.821       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.821       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.821       issued rwts: total=0,2034,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.821       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.821  job9: (groupid=0, jobs=1): err= 0: pid=80587: Mon Dec 16 06:29:21 2024
00:19:05.821    write: IOPS=201, BW=50.4MiB/s (52.9MB/s)(514MiB/10193msec); 0 zone resets
00:19:05.821      slat (usec): min=20, max=76427, avg=4858.38, stdev=9994.49
00:19:05.821      clat (msec): min=4, max=490, avg=312.24, stdev=48.81
00:19:05.821       lat (msec): min=4, max=490, avg=317.10, stdev=48.29
00:19:05.821      clat percentiles (msec):
00:19:05.821       |  1.00th=[   94],  5.00th=[  239], 10.00th=[  264], 20.00th=[  288],
00:19:05.821       | 30.00th=[  300], 40.00th=[  317], 50.00th=[  330], 60.00th=[  334],
00:19:05.821       | 70.00th=[  338], 80.00th=[  342], 90.00th=[  347], 95.00th=[  351],
00:19:05.821       | 99.00th=[  414], 99.50th=[  443], 99.90th=[  477], 99.95th=[  489],
00:19:05.821       | 99.99th=[  489]
00:19:05.821     bw (  KiB/s): min=43008, max=59392, per=3.73%, avg=50995.70, stdev=4524.69, samples=20
00:19:05.821     iops        : min=  168, max=  232, avg=199.05, stdev=17.72, samples=20
00:19:05.821    lat (msec)   : 10=0.34%, 100=0.78%, 250=5.79%, 500=93.09%
00:19:05.821    cpu          : usr=0.56%, sys=0.56%, ctx=683, majf=0, minf=1
00:19:05.821    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
00:19:05.821       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.821       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.821       issued rwts: total=0,2056,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.821       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.821  job10: (groupid=0, jobs=1): err= 0: pid=80588: Mon Dec 16 06:29:21 2024
00:19:05.821    write: IOPS=205, BW=51.3MiB/s (53.8MB/s)(524MiB/10200msec); 0 zone resets
00:19:05.821      slat (usec): min=16, max=92907, avg=4735.76, stdev=10235.14
00:19:05.821      clat (msec): min=9, max=482, avg=306.73, stdev=62.07
00:19:05.821       lat (msec): min=10, max=482, avg=311.47, stdev=62.10
00:19:05.821      clat percentiles (msec):
00:19:05.821       |  1.00th=[   52],  5.00th=[  199], 10.00th=[  251], 20.00th=[  279],
00:19:05.821       | 30.00th=[  296], 40.00th=[  317], 50.00th=[  326], 60.00th=[  334],
00:19:05.821       | 70.00th=[  342], 80.00th=[  347], 90.00th=[  351], 95.00th=[  355],
00:19:05.821       | 99.00th=[  401], 99.50th=[  435], 99.90th=[  468], 99.95th=[  485],
00:19:05.821       | 99.99th=[  485]
00:19:05.821     bw (  KiB/s): min=40960, max=83456, per=3.81%, avg=51998.60, stdev=8670.95, samples=20
00:19:05.821     iops        : min=  160, max=  326, avg=203.00, stdev=33.89, samples=20
00:19:05.821    lat (msec)   : 10=0.05%, 20=0.14%, 50=0.76%, 100=2.34%, 250=6.68%
00:19:05.821    lat (msec)   : 500=90.02%
00:19:05.821    cpu          : usr=0.34%, sys=0.38%, ctx=2258, majf=0, minf=1
00:19:05.821    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:19:05.821       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:05.821       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:19:05.821       issued rwts: total=0,2095,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:05.821       latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:05.821  
00:19:05.821  Run status group 0 (all jobs):
00:19:05.821    WRITE: bw=1334MiB/s (1399MB/s), 49.9MiB/s-328MiB/s (52.3MB/s-344MB/s), io=13.3GiB (14.3GB), run=10043-10204msec
00:19:05.821  
00:19:05.821  Disk stats (read/write):
00:19:05.821    nvme0n1: ios=49/4868, merge=0/0, ticks=52/1206632, in_queue=1206684, util=97.71%
00:19:05.821    nvme10n1: ios=49/12496, merge=0/0, ticks=30/1215697, in_queue=1215727, util=97.87%
00:19:05.821    nvme1n1: ios=26/4040, merge=0/0, ticks=23/1201603, in_queue=1201626, util=97.68%
00:19:05.821    nvme2n1: ios=25/12459, merge=0/0, ticks=28/1214419, in_queue=1214447, util=98.15%
00:19:05.821    nvme3n1: ios=0/4007, merge=0/0, ticks=0/1204149, in_queue=1204149, util=98.19%
00:19:05.821    nvme4n1: ios=0/25270, merge=0/0, ticks=0/1215364, in_queue=1215364, util=98.17%
00:19:05.821    nvme5n1: ios=0/5825, merge=0/0, ticks=0/1211335, in_queue=1211335, util=98.16%
00:19:05.821    nvme6n1: ios=0/26249, merge=0/0, ticks=0/1212833, in_queue=1212833, util=98.35%
00:19:05.821    nvme7n1: ios=0/3936, merge=0/0, ticks=0/1199798, in_queue=1199798, util=98.78%
00:19:05.821    nvme8n1: ios=0/3982, merge=0/0, ticks=0/1197354, in_queue=1197354, util=98.71%
00:19:05.821    nvme9n1: ios=0/4050, merge=0/0, ticks=0/1199699, in_queue=1199699, util=98.90%
00:19:05.821   06:29:21	-- target/multiconnection.sh@36 -- # sync
00:19:05.821    06:29:21	-- target/multiconnection.sh@37 -- # seq 1 11
00:19:05.821   06:29:21	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.821   06:29:21	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:05.821  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:19:05.821   06:29:21	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:19:05.821   06:29:21	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.821   06:29:21	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.821   06:29:21	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK1
00:19:05.821   06:29:21	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.821   06:29:21	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK1
00:19:05.821   06:29:21	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.821   06:29:21	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:05.821   06:29:21	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.821   06:29:21	-- common/autotest_common.sh@10 -- # set +x
00:19:05.821   06:29:21	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
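Note: the teardown traced above (multiconnection.sh:36-40) runs sync once, then for every subsystem disconnects the host side, waits for the SPDKn SERIAL to disappear from lsblk, and deletes the subsystem through SPDK's RPC interface (rpc_cmd wraps scripts/rpc.py). Reconstructed from the traced line numbers, its shape is roughly the sketch below; it is an approximation, not the verbatim scripts, and the per-iteration retry delay is assumed:

# Approximate shape of the traced teardown.
waitforserial_disconnect() {                     # autotest_common.sh:1208-1220 as traced
    local i=0
    # poll until no block device reports this SERIAL any more
    while lsblk -o NAME,SERIAL | grep -q -w "$1"; do
        ((i++ > 15)) && break
        sleep 1                                  # assumed retry delay
    done
    # final check against the flat listing; a remaining match means failure
    lsblk -l -o NAME,SERIAL | grep -q -w "$1" && return 1
    return 0
}

sync
for i in $(seq 1 "$NVMF_SUBSYS"); do             # multiconnection.sh:37
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    waitforserial_disconnect "SPDK$i"
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done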
00:19:05.821   06:29:21	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.821   06:29:21	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:19:05.821  NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:19:05.821   06:29:21	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:19:05.821   06:29:21	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.821   06:29:21	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.821   06:29:21	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK2
00:19:05.821   06:29:21	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.821   06:29:21	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK2
00:19:05.821   06:29:21	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.821   06:29:21	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:19:05.821   06:29:21	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.821   06:29:21	-- common/autotest_common.sh@10 -- # set +x
00:19:05.821   06:29:21	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.821   06:29:21	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.821   06:29:21	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:19:05.821  NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:19:05.821   06:29:21	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:19:05.821   06:29:21	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.821   06:29:21	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.821   06:29:21	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK3
00:19:05.821   06:29:21	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.821   06:29:21	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK3
00:19:05.821   06:29:21	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.821   06:29:21	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:19:05.821   06:29:21	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.821   06:29:21	-- common/autotest_common.sh@10 -- # set +x
00:19:05.821   06:29:21	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.821   06:29:21	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.821   06:29:21	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:19:05.821  NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:19:05.821   06:29:22	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:19:05.821   06:29:22	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.821   06:29:22	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.821   06:29:22	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK4
00:19:05.821   06:29:22	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.821   06:29:22	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK4
00:19:05.821   06:29:22	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.821   06:29:22	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:19:05.821   06:29:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.821   06:29:22	-- common/autotest_common.sh@10 -- # set +x
00:19:05.821   06:29:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.821   06:29:22	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.821   06:29:22	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:19:05.821  NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:19:05.821   06:29:22	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:19:05.821   06:29:22	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.821   06:29:22	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.821   06:29:22	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK5
00:19:05.821   06:29:22	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK5
00:19:05.822   06:29:22	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.822   06:29:22	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:19:05.822   06:29:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.822   06:29:22	-- common/autotest_common.sh@10 -- # set +x
00:19:05.822   06:29:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.822   06:29:22	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.822   06:29:22	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:19:05.822  NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:19:05.822   06:29:22	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:19:05.822   06:29:22	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK6
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK6
00:19:05.822   06:29:22	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.822   06:29:22	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:19:05.822   06:29:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.822   06:29:22	-- common/autotest_common.sh@10 -- # set +x
00:19:05.822   06:29:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.822   06:29:22	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.822   06:29:22	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:19:05.822  NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:19:05.822   06:29:22	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:19:05.822   06:29:22	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK7
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK7
00:19:05.822   06:29:22	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.822   06:29:22	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:19:05.822   06:29:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.822   06:29:22	-- common/autotest_common.sh@10 -- # set +x
00:19:05.822   06:29:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.822   06:29:22	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.822   06:29:22	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:19:05.822  NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:19:05.822   06:29:22	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:19:05.822   06:29:22	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK8
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK8
00:19:05.822   06:29:22	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.822   06:29:22	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:19:05.822   06:29:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.822   06:29:22	-- common/autotest_common.sh@10 -- # set +x
00:19:05.822   06:29:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.822   06:29:22	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.822   06:29:22	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:19:05.822  NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:19:05.822   06:29:22	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:19:05.822   06:29:22	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK9
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK9
00:19:05.822   06:29:22	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.822   06:29:22	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:19:05.822   06:29:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.822   06:29:22	-- common/autotest_common.sh@10 -- # set +x
00:19:05.822   06:29:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.822   06:29:22	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.822   06:29:22	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:19:05.822  NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:19:05.822   06:29:22	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:19:05.822   06:29:22	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK10
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK10
00:19:05.822   06:29:22	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.822   06:29:22	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:19:05.822   06:29:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.822   06:29:22	-- common/autotest_common.sh@10 -- # set +x
00:19:05.822   06:29:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.822   06:29:22	-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:19:05.822   06:29:22	-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:19:05.822  NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:19:05.822   06:29:22	-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:19:05.822   06:29:22	-- common/autotest_common.sh@1208 -- # local i=0
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1209 -- # grep -q -w SPDK11
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:19:05.822   06:29:22	-- common/autotest_common.sh@1216 -- # grep -q -w SPDK11
00:19:05.822   06:29:22	-- common/autotest_common.sh@1220 -- # return 0
00:19:05.822   06:29:22	-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:19:05.822   06:29:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.822   06:29:22	-- common/autotest_common.sh@10 -- # set +x
00:19:05.822   06:29:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
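All eleven subsystems above are torn down with the same three-step pattern: disconnect the initiator-side controller, wait until no block device reports the matching SPDKn serial, then delete the subsystem on the target. A minimal sketch of that loop (the rpc.py path and the hard-coded count of 11 are assumptions for illustration, not the script's actual helpers):
  # sketch: per-subsystem teardown as traced above
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed RPC helper path
  for i in $(seq 1 11); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # poll until the namespace with serial SPDK$i has disappeared
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do sleep 1; done
      "$rpc_py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done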
00:19:05.822   06:29:22	-- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:19:05.822   06:29:22	-- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:19:05.822   06:29:22	-- target/multiconnection.sh@47 -- # nvmftestfini
00:19:05.822   06:29:22	-- nvmf/common.sh@476 -- # nvmfcleanup
00:19:05.822   06:29:22	-- nvmf/common.sh@116 -- # sync
00:19:05.822   06:29:22	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:19:05.822   06:29:22	-- nvmf/common.sh@119 -- # set +e
00:19:05.822   06:29:22	-- nvmf/common.sh@120 -- # for i in {1..20}
00:19:05.822   06:29:22	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:19:05.822  rmmod nvme_tcp
00:19:05.822  rmmod nvme_fabrics
00:19:05.822  rmmod nvme_keyring
00:19:05.822   06:29:22	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:19:05.822   06:29:22	-- nvmf/common.sh@123 -- # set -e
00:19:05.822   06:29:22	-- nvmf/common.sh@124 -- # return 0
00:19:05.822   06:29:22	-- nvmf/common.sh@477 -- # '[' -n 79872 ']'
00:19:05.822   06:29:22	-- nvmf/common.sh@478 -- # killprocess 79872
00:19:05.822   06:29:22	-- common/autotest_common.sh@936 -- # '[' -z 79872 ']'
00:19:05.822   06:29:22	-- common/autotest_common.sh@940 -- # kill -0 79872
00:19:05.822    06:29:22	-- common/autotest_common.sh@941 -- # uname
00:19:05.822   06:29:22	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:05.822    06:29:22	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79872
00:19:06.081   06:29:22	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:06.081   06:29:22	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:06.081   06:29:22	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 79872'
00:19:06.081  killing process with pid 79872
00:19:06.082   06:29:22	-- common/autotest_common.sh@955 -- # kill 79872
00:19:06.082   06:29:22	-- common/autotest_common.sh@960 -- # wait 79872
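killprocess above first checks that pid 79872 is still alive and is an SPDK reactor rather than sudo, then kills it and reaps it. A minimal sketch of the same flow, assuming a plain SIGTERM is enough for the target to exit:
  # sketch: guarded kill of the nvmf target, as traced above
  pid=79872
  if kill -0 "$pid" 2>/dev/null; then                # still running?
      name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_0
      if [ "$name" != "sudo" ]; then
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid" 2>/dev/null || true            # reap if it is our child
      fi
  fi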
00:19:06.650   06:29:23	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:19:06.650   06:29:23	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:19:06.650   06:29:23	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:19:06.650   06:29:23	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:06.650   06:29:23	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:19:06.650   06:29:23	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:06.650   06:29:23	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:06.650    06:29:23	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:06.650   06:29:23	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:19:06.650  
00:19:06.650  real	0m49.919s
00:19:06.650  user	2m48.669s
00:19:06.650  sys	0m23.549s
00:19:06.650   06:29:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:06.650   06:29:23	-- common/autotest_common.sh@10 -- # set +x
00:19:06.650  ************************************
00:19:06.650  END TEST nvmf_multiconnection
00:19:06.650  ************************************
00:19:06.650   06:29:23	-- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:19:06.650   06:29:23	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:19:06.650   06:29:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:06.650   06:29:23	-- common/autotest_common.sh@10 -- # set +x
00:19:06.650  ************************************
00:19:06.650  START TEST nvmf_initiator_timeout
00:19:06.650  ************************************
00:19:06.651   06:29:23	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:19:06.651  * Looking for test storage...
00:19:06.651  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:19:06.651    06:29:23	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:06.651     06:29:23	-- common/autotest_common.sh@1690 -- # lcov --version
00:19:06.651     06:29:23	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:06.651    06:29:23	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:06.651    06:29:23	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:06.651    06:29:23	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:06.651    06:29:23	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:06.651    06:29:23	-- scripts/common.sh@335 -- # IFS=.-:
00:19:06.651    06:29:23	-- scripts/common.sh@335 -- # read -ra ver1
00:19:06.651    06:29:23	-- scripts/common.sh@336 -- # IFS=.-:
00:19:06.651    06:29:23	-- scripts/common.sh@336 -- # read -ra ver2
00:19:06.651    06:29:23	-- scripts/common.sh@337 -- # local 'op=<'
00:19:06.651    06:29:23	-- scripts/common.sh@339 -- # ver1_l=2
00:19:06.651    06:29:23	-- scripts/common.sh@340 -- # ver2_l=1
00:19:06.651    06:29:23	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:06.651    06:29:23	-- scripts/common.sh@343 -- # case "$op" in
00:19:06.651    06:29:23	-- scripts/common.sh@344 -- # : 1
00:19:06.651    06:29:23	-- scripts/common.sh@363 -- # (( v = 0 ))
00:19:06.651    06:29:23	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:06.651     06:29:23	-- scripts/common.sh@364 -- # decimal 1
00:19:06.651     06:29:23	-- scripts/common.sh@352 -- # local d=1
00:19:06.651     06:29:23	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:06.651     06:29:23	-- scripts/common.sh@354 -- # echo 1
00:19:06.651    06:29:23	-- scripts/common.sh@364 -- # ver1[v]=1
00:19:06.651     06:29:23	-- scripts/common.sh@365 -- # decimal 2
00:19:06.651     06:29:23	-- scripts/common.sh@352 -- # local d=2
00:19:06.651     06:29:23	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:06.651     06:29:23	-- scripts/common.sh@354 -- # echo 2
00:19:06.651    06:29:23	-- scripts/common.sh@365 -- # ver2[v]=2
00:19:06.651    06:29:23	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:19:06.651    06:29:23	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:19:06.651    06:29:23	-- scripts/common.sh@367 -- # return 0
00:19:06.651    06:29:23	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:06.651    06:29:23	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:19:06.651  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.651  		--rc genhtml_branch_coverage=1
00:19:06.651  		--rc genhtml_function_coverage=1
00:19:06.651  		--rc genhtml_legend=1
00:19:06.651  		--rc geninfo_all_blocks=1
00:19:06.651  		--rc geninfo_unexecuted_blocks=1
00:19:06.651  		
00:19:06.651  		'
00:19:06.651    06:29:23	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:19:06.651  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.651  		--rc genhtml_branch_coverage=1
00:19:06.651  		--rc genhtml_function_coverage=1
00:19:06.651  		--rc genhtml_legend=1
00:19:06.651  		--rc geninfo_all_blocks=1
00:19:06.651  		--rc geninfo_unexecuted_blocks=1
00:19:06.651  		
00:19:06.651  		'
00:19:06.651    06:29:23	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:19:06.651  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.651  		--rc genhtml_branch_coverage=1
00:19:06.651  		--rc genhtml_function_coverage=1
00:19:06.651  		--rc genhtml_legend=1
00:19:06.651  		--rc geninfo_all_blocks=1
00:19:06.651  		--rc geninfo_unexecuted_blocks=1
00:19:06.651  		
00:19:06.651  		'
00:19:06.651    06:29:23	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:19:06.651  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.651  		--rc genhtml_branch_coverage=1
00:19:06.651  		--rc genhtml_function_coverage=1
00:19:06.651  		--rc genhtml_legend=1
00:19:06.651  		--rc geninfo_all_blocks=1
00:19:06.651  		--rc geninfo_unexecuted_blocks=1
00:19:06.651  		
00:19:06.651  		'
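The block above is autotest_common.sh probing the installed lcov and, via a dotted-version comparison (1.15 < 2), deciding to enable the branch/function coverage flags exported in LCOV_OPTS/LCOV. A minimal stand-alone sketch of such a "version less-than" check, written independently of scripts/common.sh:
  # sketch: dotted-version "less than" test, compared component by component
  version_lt() {
      local IFS=.- a b v
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1    # equal is not "less than"
  }
  version_lt 1.15 2 && echo "enable lcov branch/function coverage options"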
00:19:06.651   06:29:23	-- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:19:06.651     06:29:23	-- nvmf/common.sh@7 -- # uname -s
00:19:06.651    06:29:23	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:06.651    06:29:23	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:06.651    06:29:23	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:06.651    06:29:23	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:06.651    06:29:23	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:06.651    06:29:23	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:06.651    06:29:23	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:06.651    06:29:23	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:06.651    06:29:23	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:06.651     06:29:23	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:06.651    06:29:23	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:19:06.651    06:29:23	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:19:06.651    06:29:23	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:06.651    06:29:23	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:06.651    06:29:23	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:19:06.651    06:29:23	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:19:06.651     06:29:23	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:06.651     06:29:23	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:06.651     06:29:23	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:06.651      06:29:23	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:06.651      06:29:23	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:06.651      06:29:23	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:06.651      06:29:23	-- paths/export.sh@5 -- # export PATH
00:19:06.651      06:29:23	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:06.651    06:29:23	-- nvmf/common.sh@46 -- # : 0
00:19:06.651    06:29:23	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:19:06.651    06:29:23	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:19:06.651    06:29:23	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:19:06.651    06:29:23	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:06.651    06:29:23	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:06.651    06:29:23	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:19:06.651    06:29:23	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:19:06.651    06:29:23	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:19:06.651   06:29:23	-- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:06.651   06:29:23	-- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:06.651   06:29:23	-- target/initiator_timeout.sh@14 -- # nvmftestinit
00:19:06.651   06:29:23	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:19:06.651   06:29:23	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:06.651   06:29:23	-- nvmf/common.sh@436 -- # prepare_net_devs
00:19:06.651   06:29:23	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:19:06.651   06:29:23	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:19:06.651   06:29:23	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:06.651   06:29:23	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:06.651    06:29:23	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:06.651   06:29:23	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:19:06.651   06:29:23	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:19:06.651   06:29:23	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:19:06.651   06:29:23	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:19:06.651   06:29:23	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:19:06.651   06:29:23	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:19:06.651   06:29:23	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:06.651   06:29:23	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:06.651   06:29:23	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:19:06.651   06:29:23	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:19:06.651   06:29:23	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:19:06.651   06:29:23	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:19:06.651   06:29:23	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:19:06.651   06:29:23	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:06.651   06:29:23	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:19:06.651   06:29:23	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:19:06.651   06:29:23	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:19:06.651   06:29:23	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:19:06.651   06:29:23	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:19:06.911   06:29:23	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:19:06.911  Cannot find device "nvmf_tgt_br"
00:19:06.911   06:29:23	-- nvmf/common.sh@154 -- # true
00:19:06.911   06:29:23	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:19:06.911  Cannot find device "nvmf_tgt_br2"
00:19:06.911   06:29:23	-- nvmf/common.sh@155 -- # true
00:19:06.911   06:29:23	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:19:06.911   06:29:23	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:19:06.911  Cannot find device "nvmf_tgt_br"
00:19:06.911   06:29:23	-- nvmf/common.sh@157 -- # true
00:19:06.911   06:29:23	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:19:06.911  Cannot find device "nvmf_tgt_br2"
00:19:06.911   06:29:23	-- nvmf/common.sh@158 -- # true
00:19:06.911   06:29:23	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:19:06.911   06:29:23	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:19:06.911   06:29:23	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:06.911  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:06.911   06:29:23	-- nvmf/common.sh@161 -- # true
00:19:06.911   06:29:23	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:06.911  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:06.911   06:29:23	-- nvmf/common.sh@162 -- # true
00:19:06.911   06:29:23	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:19:06.911   06:29:23	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:19:06.911   06:29:23	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:19:06.911   06:29:23	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:19:06.911   06:29:23	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:19:06.911   06:29:23	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:19:06.911   06:29:23	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:19:06.911   06:29:23	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:19:06.911   06:29:23	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:19:06.911   06:29:23	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:19:06.911   06:29:23	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:19:06.911   06:29:23	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:19:06.911   06:29:23	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:19:06.911   06:29:23	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:19:06.911   06:29:23	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:19:06.911   06:29:23	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:19:06.911   06:29:23	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:19:06.911   06:29:23	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:19:06.911   06:29:23	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:19:06.911   06:29:23	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:19:07.170   06:29:23	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:19:07.170   06:29:23	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:19:07.170   06:29:23	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:19:07.170   06:29:23	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:19:07.170  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:07.170  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms
00:19:07.170  
00:19:07.170  --- 10.0.0.2 ping statistics ---
00:19:07.170  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:07.170  rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:19:07.170   06:29:23	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:19:07.170  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:19:07.170  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms
00:19:07.170  
00:19:07.170  --- 10.0.0.3 ping statistics ---
00:19:07.170  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:07.170  rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:19:07.170   06:29:23	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:19:07.170  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:07.170  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
00:19:07.170  
00:19:07.170  --- 10.0.0.1 ping statistics ---
00:19:07.170  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:07.170  rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:19:07.170   06:29:23	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:07.170   06:29:23	-- nvmf/common.sh@421 -- # return 0
00:19:07.170   06:29:23	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:19:07.170   06:29:23	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:07.170   06:29:23	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:19:07.170   06:29:23	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:19:07.170   06:29:23	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:07.170   06:29:23	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:19:07.170   06:29:23	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
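nvmf_veth_init above rebuilds the test network from scratch: the initiator interface nvmf_init_if (10.0.0.1) stays in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, and connectivity is verified with pings before nvme-tcp is loaded. A condensed sketch of one initiator/target leg of that topology (iptables rules and the second target interface omitted):
  # sketch: one veth pair per side, bridged together in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2    # root namespace -> target namespace over the bridge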
00:19:07.170   06:29:23	-- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF
00:19:07.171   06:29:23	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:19:07.171   06:29:23	-- common/autotest_common.sh@722 -- # xtrace_disable
00:19:07.171   06:29:23	-- common/autotest_common.sh@10 -- # set +x
00:19:07.171   06:29:23	-- nvmf/common.sh@469 -- # nvmfpid=80957
00:19:07.171   06:29:23	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:07.171   06:29:23	-- nvmf/common.sh@470 -- # waitforlisten 80957
00:19:07.171   06:29:23	-- common/autotest_common.sh@829 -- # '[' -z 80957 ']'
00:19:07.171   06:29:23	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:07.171   06:29:23	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:07.171   06:29:23	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:07.171  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:07.171   06:29:23	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:07.171   06:29:23	-- common/autotest_common.sh@10 -- # set +x
00:19:07.171  [2024-12-16 06:29:23.999563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:07.171  [2024-12-16 06:29:23.999625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:07.171  [2024-12-16 06:29:24.130454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:07.430  [2024-12-16 06:29:24.209104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:19:07.430  [2024-12-16 06:29:24.209234] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:07.430  [2024-12-16 06:29:24.209246] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:07.430  [2024-12-16 06:29:24.209253] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:07.430  [2024-12-16 06:29:24.209571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:07.430  [2024-12-16 06:29:24.209681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:07.430  [2024-12-16 06:29:24.210229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:07.430  [2024-12-16 06:29:24.210277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:08.394   06:29:25	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:08.394   06:29:25	-- common/autotest_common.sh@862 -- # return 0
00:19:08.394   06:29:25	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:19:08.394   06:29:25	-- common/autotest_common.sh@728 -- # xtrace_disable
00:19:08.394   06:29:25	-- common/autotest_common.sh@10 -- # set +x
00:19:08.394   06:29:25	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:08.394   06:29:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.394   06:29:25	-- common/autotest_common.sh@10 -- # set +x
00:19:08.394  Malloc0
00:19:08.394   06:29:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
00:19:08.394   06:29:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.394   06:29:25	-- common/autotest_common.sh@10 -- # set +x
00:19:08.394  Delay0
00:19:08.394   06:29:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:08.394   06:29:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.394   06:29:25	-- common/autotest_common.sh@10 -- # set +x
00:19:08.394  [2024-12-16 06:29:25.124420] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:08.394   06:29:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:19:08.394   06:29:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.394   06:29:25	-- common/autotest_common.sh@10 -- # set +x
00:19:08.394   06:29:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:19:08.394   06:29:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.394   06:29:25	-- common/autotest_common.sh@10 -- # set +x
00:19:08.394   06:29:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:08.394   06:29:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.394   06:29:25	-- common/autotest_common.sh@10 -- # set +x
00:19:08.394  [2024-12-16 06:29:25.152657] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:08.394   06:29:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
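With the target listening on its RPC socket, the rpc_cmd calls above build the data path: a 64 MiB malloc bdev wrapped in a delay bdev, a TCP transport, subsystem cnode1 with serial SPDKISFASTANDAWESOME, the Delay0 namespace, and a listener on 10.0.0.2:4420. The same sequence expressed as direct rpc.py invocations (helper path and default RPC socket are assumptions; the test's rpc_cmd wrapper handles this internally):
  # sketch: target configuration traced above, issued via rpc.py directly
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path; talks to the default RPC socket
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420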
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:19:08.394   06:29:25	-- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:19:08.394   06:29:25	-- common/autotest_common.sh@1187 -- # local i=0
00:19:08.394   06:29:25	-- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0
00:19:08.394   06:29:25	-- common/autotest_common.sh@1189 -- # [[ -n '' ]]
00:19:08.394   06:29:25	-- common/autotest_common.sh@1194 -- # sleep 2
00:19:10.931   06:29:27	-- common/autotest_common.sh@1195 -- # (( i++ <= 15 ))
00:19:10.931    06:29:27	-- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME
00:19:10.931    06:29:27	-- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL
00:19:10.931   06:29:27	-- common/autotest_common.sh@1196 -- # nvme_devices=1
00:19:10.931   06:29:27	-- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter ))
00:19:10.931   06:29:27	-- common/autotest_common.sh@1197 -- # return 0
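waitforserial above polls lsblk until at least one block device advertises the requested serial, which is how the script confirms the nvme connect actually produced a namespace. A minimal sketch of that polling loop (sketch only; the real helper lives in autotest_common.sh):
  # sketch: wait for a block device with the expected serial to appear
  waitforserial() {
      local serial=$1 i=0 count
      while (( i++ <= 15 )); do
          count=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( count >= 1 )) && return 0
          sleep 2
      done
      return 1
  }
  waitforserial SPDKISFASTANDAWESOME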
00:19:10.931   06:29:27	-- target/initiator_timeout.sh@35 -- # fio_pid=81045
00:19:10.931   06:29:27	-- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:19:10.931   06:29:27	-- target/initiator_timeout.sh@37 -- # sleep 3
00:19:10.931  [global]
00:19:10.931  thread=1
00:19:10.931  invalidate=1
00:19:10.931  rw=write
00:19:10.931  time_based=1
00:19:10.931  runtime=60
00:19:10.931  ioengine=libaio
00:19:10.931  direct=1
00:19:10.931  bs=4096
00:19:10.931  iodepth=1
00:19:10.931  norandommap=0
00:19:10.931  numjobs=1
00:19:10.931  
00:19:10.931  verify_dump=1
00:19:10.931  verify_backlog=512
00:19:10.931  verify_state_save=0
00:19:10.931  do_verify=1
00:19:10.931  verify=crc32c-intel
00:19:10.931  [job0]
00:19:10.931  filename=/dev/nvme0n1
00:19:10.931  Could not set queue depth (nvme0n1)
00:19:10.931  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:10.931  fio-3.35
00:19:10.932  Starting 1 thread
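The job file above, generated by the fio-wrapper, runs a 60-second time-based sequential write at 4 KiB block size and queue depth 1 against the newly connected /dev/nvme0n1, with crc32c-intel data verification. Roughly the same workload as a one-line fio invocation (stock fio option spelling, not the wrapper's):
  # sketch: approximate command-line equivalent of the traced job file
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
      --time_based --runtime=60 --invalidate=1 \
      --verify=crc32c-intel --do_verify=1 --verify_backlog=512 \
      --verify_dump=1 --verify_state_save=0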
00:19:13.464   06:29:30	-- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:19:13.464   06:29:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:13.464   06:29:30	-- common/autotest_common.sh@10 -- # set +x
00:19:13.464  true
00:19:13.464   06:29:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:13.464   06:29:30	-- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:19:13.464   06:29:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:13.464   06:29:30	-- common/autotest_common.sh@10 -- # set +x
00:19:13.464  true
00:19:13.464   06:29:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:13.464   06:29:30	-- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:19:13.464   06:29:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:13.464   06:29:30	-- common/autotest_common.sh@10 -- # set +x
00:19:13.464  true
00:19:13.464   06:29:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:13.464   06:29:30	-- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:19:13.464   06:29:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:13.464   06:29:30	-- common/autotest_common.sh@10 -- # set +x
00:19:13.464  true
00:19:13.465   06:29:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:13.465   06:29:30	-- target/initiator_timeout.sh@45 -- # sleep 3
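The four bdev_delay_update_latency calls above push Delay0's injected latencies from the 30 µs configured at creation into the tens-of-seconds range (the values are microseconds per SPDK's delay bdev RPCs: 31000000 µs is about 31 s, and 310000000 µs about 310 s for p99 write), presumably to exercise the initiator's I/O timeout handling while fio keeps running; after the sleep the test restores them. The same step as direct RPC calls, reusing the rpc_py shorthand from the earlier sketch:
  # sketch: raise the injected latencies (values in microseconds), then give them time to bite
  $rpc_py bdev_delay_update_latency Delay0 avg_read 31000000
  $rpc_py bdev_delay_update_latency Delay0 avg_write 31000000
  $rpc_py bdev_delay_update_latency Delay0 p99_read 31000000
  $rpc_py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3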
00:19:16.757   06:29:33	-- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:19:16.757   06:29:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:16.757   06:29:33	-- common/autotest_common.sh@10 -- # set +x
00:19:16.757  true
00:19:16.757   06:29:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:16.757   06:29:33	-- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:19:16.757   06:29:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:16.757   06:29:33	-- common/autotest_common.sh@10 -- # set +x
00:19:16.757  true
00:19:16.757   06:29:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:16.757   06:29:33	-- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:19:16.757   06:29:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:16.757   06:29:33	-- common/autotest_common.sh@10 -- # set +x
00:19:16.757  true
00:19:16.757   06:29:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:16.757   06:29:33	-- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:19:16.757   06:29:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:16.757   06:29:33	-- common/autotest_common.sh@10 -- # set +x
00:19:16.757  true
00:19:16.757   06:29:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:16.757   06:29:33	-- target/initiator_timeout.sh@53 -- # fio_status=0
00:19:16.757   06:29:33	-- target/initiator_timeout.sh@54 -- # wait 81045
00:20:12.983  
00:20:12.983  job0: (groupid=0, jobs=1): err= 0: pid=81066: Mon Dec 16 06:30:27 2024
00:20:12.983    read: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec)
00:20:12.983      slat (usec): min=11, max=16839, avg=14.23, stdev=82.30
00:20:12.983      clat (usec): min=154, max=40676k, avg=993.51, stdev=180669.06
00:20:12.983       lat (usec): min=166, max=40676k, avg=1007.74, stdev=180669.09
00:20:12.983      clat percentiles (usec):
00:20:12.983       |  1.00th=[  165],  5.00th=[  172], 10.00th=[  176], 20.00th=[  180],
00:20:12.983       | 30.00th=[  182], 40.00th=[  186], 50.00th=[  188], 60.00th=[  192],
00:20:12.983       | 70.00th=[  196], 80.00th=[  202], 90.00th=[  210], 95.00th=[  221],
00:20:12.983       | 99.00th=[  243], 99.50th=[  253], 99.90th=[  322], 99.95th=[  429],
00:20:12.983       | 99.99th=[  660]
00:20:12.983    write: IOPS=848, BW=3393KiB/s (3475kB/s)(199MiB/60000msec); 0 zone resets
00:20:12.983      slat (usec): min=16, max=805, avg=19.62, stdev= 7.51
00:20:12.983      clat (usec): min=117, max=2807, avg=152.73, stdev=25.21
00:20:12.983       lat (usec): min=137, max=2832, avg=172.35, stdev=26.69
00:20:12.983      clat percentiles (usec):
00:20:12.983       |  1.00th=[  133],  5.00th=[  137], 10.00th=[  139], 20.00th=[  143],
00:20:12.983       | 30.00th=[  145], 40.00th=[  149], 50.00th=[  151], 60.00th=[  153],
00:20:12.983       | 70.00th=[  155], 80.00th=[  161], 90.00th=[  169], 95.00th=[  178],
00:20:12.983       | 99.00th=[  202], 99.50th=[  212], 99.90th=[  269], 99.95th=[  363],
00:20:12.983       | 99.99th=[  644]
00:20:12.983     bw (  KiB/s): min= 1976, max=12288, per=100.00%, avg=10184.74, stdev=2119.41, samples=39
00:20:12.983     iops        : min=  494, max= 3072, avg=2546.18, stdev=529.85, samples=39
00:20:12.983    lat (usec)   : 250=99.61%, 500=0.35%, 750=0.02%, 1000=0.01%
00:20:12.983    lat (msec)   : 4=0.01%, >=2000=0.01%
00:20:12.983    cpu          : usr=0.55%, sys=2.06%, ctx=101663, majf=0, minf=5
00:20:12.983    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:12.983       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:12.983       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:12.983       issued rwts: total=50688,50901,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:12.983       latency   : target=0, window=0, percentile=100.00%, depth=1
00:20:12.983  
00:20:12.983  Run status group 0 (all jobs):
00:20:12.983     READ: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec
00:20:12.983    WRITE: bw=3393KiB/s (3475kB/s), 3393KiB/s-3393KiB/s (3475kB/s-3475kB/s), io=199MiB (208MB), run=60000-60000msec
00:20:12.983  
00:20:12.983  Disk stats (read/write):
00:20:12.983    nvme0n1: ios=50718/50688, merge=0/0, ticks=9919/8255, in_queue=18174, util=99.89%
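The summary above is internally consistent: 50688 reads over 60 s is about 844.8 IOPS, and 844.8 x 4 KiB is roughly 3379 KiB/s with 50688 x 4 KiB close to 198 MiB in total; likewise 50901 writes give about 848 IOPS, 3393 KiB/s and 199 MiB, matching the per-job BW/IOPS fields and the READ/WRITE run-status lines. The multi-second maximum completion latency (max=40676k µs, roughly 40.7 s) is presumably the window in which Delay0's injected latency was raised.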
00:20:12.983   06:30:27	-- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:12.983  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:12.983   06:30:27	-- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:20:12.983   06:30:27	-- common/autotest_common.sh@1208 -- # local i=0
00:20:12.983   06:30:27	-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL
00:20:12.983   06:30:27	-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME
00:20:12.983   06:30:27	-- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL
00:20:12.983   06:30:27	-- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:20:12.983   06:30:27	-- common/autotest_common.sh@1220 -- # return 0
00:20:12.983   06:30:27	-- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:20:12.983   06:30:27	-- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:20:12.983  nvmf hotplug test: fio successful as expected
00:20:12.983   06:30:27	-- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:12.983   06:30:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:12.983   06:30:27	-- common/autotest_common.sh@10 -- # set +x
00:20:12.983   06:30:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:12.983   06:30:27	-- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:20:12.983   06:30:27	-- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:20:12.983   06:30:27	-- target/initiator_timeout.sh@73 -- # nvmftestfini
00:20:12.983   06:30:27	-- nvmf/common.sh@476 -- # nvmfcleanup
00:20:12.983   06:30:27	-- nvmf/common.sh@116 -- # sync
00:20:12.983   06:30:27	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:12.983   06:30:27	-- nvmf/common.sh@119 -- # set +e
00:20:12.983   06:30:27	-- nvmf/common.sh@120 -- # for i in {1..20}
00:20:12.983   06:30:27	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:12.983  rmmod nvme_tcp
00:20:12.983  rmmod nvme_fabrics
00:20:12.983  rmmod nvme_keyring
00:20:12.983   06:30:27	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:12.983   06:30:27	-- nvmf/common.sh@123 -- # set -e
00:20:12.983   06:30:27	-- nvmf/common.sh@124 -- # return 0
00:20:12.983   06:30:27	-- nvmf/common.sh@477 -- # '[' -n 80957 ']'
00:20:12.983   06:30:27	-- nvmf/common.sh@478 -- # killprocess 80957
00:20:12.983   06:30:27	-- common/autotest_common.sh@936 -- # '[' -z 80957 ']'
00:20:12.983   06:30:27	-- common/autotest_common.sh@940 -- # kill -0 80957
00:20:12.983    06:30:27	-- common/autotest_common.sh@941 -- # uname
00:20:12.983   06:30:27	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:12.983    06:30:27	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80957
00:20:12.983   06:30:27	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:12.983   06:30:27	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:12.983   06:30:27	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 80957'
00:20:12.983  killing process with pid 80957
00:20:12.983   06:30:27	-- common/autotest_common.sh@955 -- # kill 80957
00:20:12.983   06:30:27	-- common/autotest_common.sh@960 -- # wait 80957
00:20:12.983   06:30:28	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:12.983   06:30:28	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:12.983   06:30:28	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:12.983   06:30:28	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:12.983   06:30:28	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:12.983   06:30:28	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:12.983   06:30:28	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:12.983    06:30:28	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.983   06:30:28	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:20:12.983  
00:20:12.983  real	1m4.720s
00:20:12.983  user	4m7.711s
00:20:12.983  sys	0m7.865s
00:20:12.983   06:30:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:12.983   06:30:28	-- common/autotest_common.sh@10 -- # set +x
00:20:12.983  ************************************
00:20:12.983  END TEST nvmf_initiator_timeout
00:20:12.983  ************************************
00:20:12.983   06:30:28	-- nvmf/nvmf.sh@69 -- # [[ virt == phy ]]
00:20:12.983   06:30:28	-- nvmf/nvmf.sh@86 -- # timing_exit target
00:20:12.983   06:30:28	-- common/autotest_common.sh@728 -- # xtrace_disable
00:20:12.983   06:30:28	-- common/autotest_common.sh@10 -- # set +x
00:20:12.983   06:30:28	-- nvmf/nvmf.sh@88 -- # timing_enter host
00:20:12.983   06:30:28	-- common/autotest_common.sh@722 -- # xtrace_disable
00:20:12.984   06:30:28	-- common/autotest_common.sh@10 -- # set +x
00:20:12.984   06:30:28	-- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:20:12.984   06:30:28	-- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:20:12.984   06:30:28	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:12.984   06:30:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:12.984   06:30:28	-- common/autotest_common.sh@10 -- # set +x
00:20:12.984  ************************************
00:20:12.984  START TEST nvmf_multicontroller
00:20:12.984  ************************************
00:20:12.984   06:30:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:20:12.984  * Looking for test storage...
00:20:12.984  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:12.984    06:30:28	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:12.984     06:30:28	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:12.984     06:30:28	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:12.984    06:30:28	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:12.984    06:30:28	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:12.984    06:30:28	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:12.984    06:30:28	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:12.984    06:30:28	-- scripts/common.sh@335 -- # IFS=.-:
00:20:12.984    06:30:28	-- scripts/common.sh@335 -- # read -ra ver1
00:20:12.984    06:30:28	-- scripts/common.sh@336 -- # IFS=.-:
00:20:12.984    06:30:28	-- scripts/common.sh@336 -- # read -ra ver2
00:20:12.984    06:30:28	-- scripts/common.sh@337 -- # local 'op=<'
00:20:12.984    06:30:28	-- scripts/common.sh@339 -- # ver1_l=2
00:20:12.984    06:30:28	-- scripts/common.sh@340 -- # ver2_l=1
00:20:12.984    06:30:28	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:12.984    06:30:28	-- scripts/common.sh@343 -- # case "$op" in
00:20:12.984    06:30:28	-- scripts/common.sh@344 -- # : 1
00:20:12.984    06:30:28	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:12.984    06:30:28	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:12.984     06:30:28	-- scripts/common.sh@364 -- # decimal 1
00:20:12.984     06:30:28	-- scripts/common.sh@352 -- # local d=1
00:20:12.984     06:30:28	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:12.984     06:30:28	-- scripts/common.sh@354 -- # echo 1
00:20:12.984    06:30:28	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:12.984     06:30:28	-- scripts/common.sh@365 -- # decimal 2
00:20:12.984     06:30:28	-- scripts/common.sh@352 -- # local d=2
00:20:12.984     06:30:28	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:12.984     06:30:28	-- scripts/common.sh@354 -- # echo 2
00:20:12.984    06:30:28	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:12.984    06:30:28	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:12.984    06:30:28	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:12.984    06:30:28	-- scripts/common.sh@367 -- # return 0
00:20:12.984    06:30:28	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:12.984    06:30:28	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:12.984  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:12.984  		--rc genhtml_branch_coverage=1
00:20:12.984  		--rc genhtml_function_coverage=1
00:20:12.984  		--rc genhtml_legend=1
00:20:12.984  		--rc geninfo_all_blocks=1
00:20:12.984  		--rc geninfo_unexecuted_blocks=1
00:20:12.984  		
00:20:12.984  		'
00:20:12.984    06:30:28	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:12.984  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:12.984  		--rc genhtml_branch_coverage=1
00:20:12.984  		--rc genhtml_function_coverage=1
00:20:12.984  		--rc genhtml_legend=1
00:20:12.984  		--rc geninfo_all_blocks=1
00:20:12.984  		--rc geninfo_unexecuted_blocks=1
00:20:12.984  		
00:20:12.984  		'
00:20:12.984    06:30:28	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:12.984  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:12.984  		--rc genhtml_branch_coverage=1
00:20:12.984  		--rc genhtml_function_coverage=1
00:20:12.984  		--rc genhtml_legend=1
00:20:12.984  		--rc geninfo_all_blocks=1
00:20:12.984  		--rc geninfo_unexecuted_blocks=1
00:20:12.984  		
00:20:12.984  		'
00:20:12.984    06:30:28	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:12.984  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:12.984  		--rc genhtml_branch_coverage=1
00:20:12.984  		--rc genhtml_function_coverage=1
00:20:12.984  		--rc genhtml_legend=1
00:20:12.984  		--rc geninfo_all_blocks=1
00:20:12.984  		--rc geninfo_unexecuted_blocks=1
00:20:12.984  		
00:20:12.984  		'
00:20:12.984   06:30:28	-- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:12.984     06:30:28	-- nvmf/common.sh@7 -- # uname -s
00:20:12.984    06:30:28	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:12.984    06:30:28	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:12.984    06:30:28	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:12.984    06:30:28	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:12.984    06:30:28	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:12.984    06:30:28	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:12.984    06:30:28	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:12.984    06:30:28	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:12.984    06:30:28	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:12.984     06:30:28	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:12.984    06:30:28	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:20:12.984    06:30:28	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:20:12.984    06:30:28	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:12.984    06:30:28	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:12.984    06:30:28	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:12.984    06:30:28	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:12.984     06:30:28	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:12.984     06:30:28	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:12.984     06:30:28	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:12.984      06:30:28	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.984      06:30:28	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.984      06:30:28	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.984      06:30:28	-- paths/export.sh@5 -- # export PATH
00:20:12.984      06:30:28	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.984    06:30:28	-- nvmf/common.sh@46 -- # : 0
00:20:12.984    06:30:28	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:12.984    06:30:28	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:12.984    06:30:28	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:12.984    06:30:28	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:12.984    06:30:28	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:12.984    06:30:28	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:12.984    06:30:28	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:12.984    06:30:28	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:12.984   06:30:28	-- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:12.984   06:30:28	-- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:12.984   06:30:28	-- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:20:12.984   06:30:28	-- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:20:12.984   06:30:28	-- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:12.984   06:30:28	-- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:20:12.984   06:30:28	-- host/multicontroller.sh@23 -- # nvmftestinit
00:20:12.984   06:30:28	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:12.984   06:30:28	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:12.984   06:30:28	-- nvmf/common.sh@436 -- # prepare_net_devs
00:20:12.984   06:30:28	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:12.985   06:30:28	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:12.985   06:30:28	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:12.985   06:30:28	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:12.985    06:30:28	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.985   06:30:28	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:20:12.985   06:30:28	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:20:12.985   06:30:28	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:20:12.985   06:30:28	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:20:12.985   06:30:28	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:20:12.985   06:30:28	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:20:12.985   06:30:28	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:12.985   06:30:28	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:12.985   06:30:28	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:20:12.985   06:30:28	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:20:12.985   06:30:28	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:20:12.985   06:30:28	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:20:12.985   06:30:28	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:20:12.985   06:30:28	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:12.985   06:30:28	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:20:12.985   06:30:28	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:20:12.985   06:30:28	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:20:12.985   06:30:28	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:20:12.985   06:30:28	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:20:12.985   06:30:28	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:20:12.985  Cannot find device "nvmf_tgt_br"
00:20:12.985   06:30:28	-- nvmf/common.sh@154 -- # true
00:20:12.985   06:30:28	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:20:12.985  Cannot find device "nvmf_tgt_br2"
00:20:12.985   06:30:28	-- nvmf/common.sh@155 -- # true
00:20:12.985   06:30:28	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:20:12.985   06:30:28	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:20:12.985  Cannot find device "nvmf_tgt_br"
00:20:12.985   06:30:28	-- nvmf/common.sh@157 -- # true
00:20:12.985   06:30:28	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:20:12.985  Cannot find device "nvmf_tgt_br2"
00:20:12.985   06:30:28	-- nvmf/common.sh@158 -- # true
00:20:12.985   06:30:28	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:20:12.985   06:30:28	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:20:12.985   06:30:28	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:20:12.985  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:12.985   06:30:28	-- nvmf/common.sh@161 -- # true
00:20:12.985   06:30:28	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:20:12.985  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:12.985   06:30:28	-- nvmf/common.sh@162 -- # true
00:20:12.985   06:30:28	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:20:12.985   06:30:28	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:20:12.985   06:30:28	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:20:12.985   06:30:28	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:20:12.985   06:30:28	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:20:12.985   06:30:28	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:20:12.985   06:30:28	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:20:12.985   06:30:28	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:20:12.985   06:30:28	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:20:12.985   06:30:28	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:20:12.985   06:30:28	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:20:12.985   06:30:28	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:20:12.985   06:30:28	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:20:12.985   06:30:28	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:20:12.985   06:30:28	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:20:12.985   06:30:28	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:20:12.985   06:30:28	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:20:12.985   06:30:28	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:20:12.985   06:30:28	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:20:12.985   06:30:28	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:20:12.985   06:30:28	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:20:12.985   06:30:28	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:20:12.985   06:30:28	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:20:12.985   06:30:28	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:20:12.985  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:12.985  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms
00:20:12.985  
00:20:12.985  --- 10.0.0.2 ping statistics ---
00:20:12.985  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:12.985  rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:20:12.985   06:30:28	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:20:12.985  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:12.985  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms
00:20:12.985  
00:20:12.985  --- 10.0.0.3 ping statistics ---
00:20:12.985  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:12.985  rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:20:12.985   06:30:28	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:12.985  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:12.985  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
00:20:12.985  
00:20:12.985  --- 10.0.0.1 ping statistics ---
00:20:12.985  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:12.985  rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
00:20:12.985   06:30:28	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:12.985   06:30:28	-- nvmf/common.sh@421 -- # return 0
00:20:12.985   06:30:28	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:20:12.985   06:30:28	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:12.985   06:30:28	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:20:12.985   06:30:28	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:20:12.985   06:30:28	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:12.985   06:30:28	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:20:12.985   06:30:28	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
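The nvmf_veth_init sequence traced above boils down to the following topology setup (a condensed sketch of the commands visible in the trace; the helper's stale-device cleanup, the second target interface nvmf_tgt_if2/10.0.0.3, and error suppression are left out):

    # isolated namespace for the target, a veth pair per side, and a bridge on the host
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator side gets 10.0.0.1, target side gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # verify reachability in both directions, then load the host-side driver
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp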
00:20:12.985   06:30:28	-- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:20:12.985   06:30:28	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:20:12.985   06:30:28	-- common/autotest_common.sh@722 -- # xtrace_disable
00:20:12.985   06:30:28	-- common/autotest_common.sh@10 -- # set +x
00:20:12.985   06:30:28	-- nvmf/common.sh@469 -- # nvmfpid=81905
00:20:12.985   06:30:28	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:12.985   06:30:28	-- nvmf/common.sh@470 -- # waitforlisten 81905
00:20:12.985   06:30:28	-- common/autotest_common.sh@829 -- # '[' -z 81905 ']'
00:20:12.985  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:12.985   06:30:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:12.985   06:30:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:12.985   06:30:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:12.985   06:30:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:12.985   06:30:28	-- common/autotest_common.sh@10 -- # set +x
00:20:12.985  [2024-12-16 06:30:28.860391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:12.985  [2024-12-16 06:30:28.860604] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:12.985  [2024-12-16 06:30:28.992009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:12.985  [2024-12-16 06:30:29.080442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:12.985  [2024-12-16 06:30:29.080853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:12.985  [2024-12-16 06:30:29.080881] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:12.985  [2024-12-16 06:30:29.080892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:12.985  [2024-12-16 06:30:29.081062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:12.985  [2024-12-16 06:30:29.081193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:12.985  [2024-12-16 06:30:29.081183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:12.985   06:30:29	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:12.985   06:30:29	-- common/autotest_common.sh@862 -- # return 0
00:20:12.985   06:30:29	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:20:12.985   06:30:29	-- common/autotest_common.sh@728 -- # xtrace_disable
00:20:12.985   06:30:29	-- common/autotest_common.sh@10 -- # set +x
00:20:12.985   06:30:29	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:12.986   06:30:29	-- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:12.986   06:30:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:12.986   06:30:29	-- common/autotest_common.sh@10 -- # set +x
00:20:12.986  [2024-12-16 06:30:29.924913] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:12.986   06:30:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:12.986   06:30:29	-- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:12.986   06:30:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:12.986   06:30:29	-- common/autotest_common.sh@10 -- # set +x
00:20:13.245  Malloc0
00:20:13.245   06:30:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.245   06:30:29	-- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:13.245   06:30:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.245   06:30:29	-- common/autotest_common.sh@10 -- # set +x
00:20:13.245   06:30:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.245   06:30:29	-- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:13.245   06:30:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.245   06:30:29	-- common/autotest_common.sh@10 -- # set +x
00:20:13.245   06:30:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.245   06:30:29	-- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:13.245   06:30:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.245   06:30:29	-- common/autotest_common.sh@10 -- # set +x
00:20:13.245  [2024-12-16 06:30:29.991426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:13.245   06:30:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.245   06:30:29	-- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:13.245   06:30:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.245   06:30:29	-- common/autotest_common.sh@10 -- # set +x
00:20:13.245  [2024-12-16 06:30:29.999296] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:20:13.245   06:30:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.245   06:30:30	-- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:20:13.245   06:30:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.245   06:30:30	-- common/autotest_common.sh@10 -- # set +x
00:20:13.245  Malloc1
00:20:13.245   06:30:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.246   06:30:30	-- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:20:13.246   06:30:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.246   06:30:30	-- common/autotest_common.sh@10 -- # set +x
00:20:13.246   06:30:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.246   06:30:30	-- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:20:13.246   06:30:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.246   06:30:30	-- common/autotest_common.sh@10 -- # set +x
00:20:13.246   06:30:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.246   06:30:30	-- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:20:13.246   06:30:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.246   06:30:30	-- common/autotest_common.sh@10 -- # set +x
00:20:13.246   06:30:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.246   06:30:30	-- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:20:13.246   06:30:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.246   06:30:30	-- common/autotest_common.sh@10 -- # set +x
00:20:13.246  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:13.246   06:30:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
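Outside the harness, the target-side configuration driven above through rpc_cmd maps roughly onto the following scripts/rpc.py calls against the default /var/tmp/spdk.sock (a sketch: rpc_cmd is assumed to be the autotest wrapper around rpc.py, the flags are copied verbatim from the trace, and the second subsystem cnode2 backed by Malloc1 is set up the same way on the same two ports):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with the options used at multicontroller.sh@27
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # malloc bdev sized per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above (64 MB, 512-byte blocks)
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # subsystem cnode1, open to any host, exposing Malloc0 on ports 4420 and 4421
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421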
00:20:13.246   06:30:30	-- host/multicontroller.sh@44 -- # bdevperf_pid=81957
00:20:13.246   06:30:30	-- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:20:13.246   06:30:30	-- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:13.246   06:30:30	-- host/multicontroller.sh@47 -- # waitforlisten 81957 /var/tmp/bdevperf.sock
00:20:13.246   06:30:30	-- common/autotest_common.sh@829 -- # '[' -z 81957 ']'
00:20:13.246   06:30:30	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:13.246   06:30:30	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:13.246   06:30:30	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:13.246   06:30:30	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:13.246   06:30:30	-- common/autotest_common.sh@10 -- # set +x
00:20:14.184   06:30:31	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:14.184   06:30:31	-- common/autotest_common.sh@862 -- # return 0
00:20:14.184   06:30:31	-- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:20:14.184   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.184   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.443  NVMe0n1
00:20:14.443   06:30:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.443   06:30:31	-- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:14.443   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.443   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.443   06:30:31	-- host/multicontroller.sh@54 -- # grep -c NVMe
00:20:14.443   06:30:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.443  1
00:20:14.443   06:30:31	-- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:20:14.443   06:30:31	-- common/autotest_common.sh@650 -- # local es=0
00:20:14.443   06:30:31	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:20:14.443   06:30:31	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:20:14.443   06:30:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:14.443    06:30:31	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:20:14.443   06:30:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:14.443   06:30:31	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:20:14.443   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.443   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.443  2024/12/16 06:30:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:20:14.443  request:
00:20:14.443  {
00:20:14.443  "method": "bdev_nvme_attach_controller",
00:20:14.444  "params": {
00:20:14.444  "name": "NVMe0",
00:20:14.444  "trtype": "tcp",
00:20:14.444  "traddr": "10.0.0.2",
00:20:14.444  "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:20:14.444  "hostaddr": "10.0.0.2",
00:20:14.444  "hostsvcid": "60000",
00:20:14.444  "adrfam": "ipv4",
00:20:14.444  "trsvcid": "4420",
00:20:14.444  "subnqn": "nqn.2016-06.io.spdk:cnode1"
00:20:14.444  }
00:20:14.444  }
00:20:14.444  Got JSON-RPC error response
00:20:14.444  GoRPCClient: error on JSON-RPC call
00:20:14.444   06:30:31	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:20:14.444   06:30:31	-- common/autotest_common.sh@653 -- # es=1
00:20:14.444   06:30:31	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:14.444   06:30:31	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:14.444   06:30:31	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
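Each of the rejected attach attempts in this block follows the same expected-failure pattern: the RPC has to fail with the "already exists" error shown above for the test to pass. Stripped of the NOT/valid_exec_arg plumbing, the check amounts to something like this sketch (rpc.py standing in for the harness's rpc_cmd):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # re-attaching a controller named NVMe0 with a different host NQN must be rejected
    if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
            -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi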
00:20:14.444   06:30:31	-- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:20:14.444   06:30:31	-- common/autotest_common.sh@650 -- # local es=0
00:20:14.444   06:30:31	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:20:14.444   06:30:31	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:20:14.444   06:30:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:14.444    06:30:31	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:20:14.444   06:30:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:14.444   06:30:31	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:20:14.444   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.444  2024/12/16 06:30:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:20:14.444  request:
00:20:14.444  {
00:20:14.444  "method": "bdev_nvme_attach_controller",
00:20:14.444  "params": {
00:20:14.444  "name": "NVMe0",
00:20:14.444  "trtype": "tcp",
00:20:14.444  "traddr": "10.0.0.2",
00:20:14.444  "hostaddr": "10.0.0.2",
00:20:14.444  "hostsvcid": "60000",
00:20:14.444  "adrfam": "ipv4",
00:20:14.444  "trsvcid": "4420",
00:20:14.444  "subnqn": "nqn.2016-06.io.spdk:cnode2"
00:20:14.444  }
00:20:14.444  }
00:20:14.444  Got JSON-RPC error response
00:20:14.444  GoRPCClient: error on JSON-RPC call
00:20:14.444   06:30:31	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:20:14.444   06:30:31	-- common/autotest_common.sh@653 -- # es=1
00:20:14.444   06:30:31	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:14.444   06:30:31	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:14.444   06:30:31	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:14.444   06:30:31	-- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@650 -- # local es=0
00:20:14.444   06:30:31	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:20:14.444   06:30:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:14.444    06:30:31	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:20:14.444   06:30:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:14.444   06:30:31	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.444  2024/12/16 06:30:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled
00:20:14.444  request:
00:20:14.444  {
00:20:14.444  "method": "bdev_nvme_attach_controller",
00:20:14.444  "params": {
00:20:14.444  "name": "NVMe0",
00:20:14.444  "trtype": "tcp",
00:20:14.444  "traddr": "10.0.0.2",
00:20:14.444  "hostaddr": "10.0.0.2",
00:20:14.444  "hostsvcid": "60000",
00:20:14.444  "adrfam": "ipv4",
00:20:14.444  "trsvcid": "4420",
00:20:14.444  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:14.444  "multipath": "disable"
00:20:14.444  }
00:20:14.444  }
00:20:14.444  Got JSON-RPC error response
00:20:14.444  GoRPCClient: error on JSON-RPC call
00:20:14.444   06:30:31	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:20:14.444   06:30:31	-- common/autotest_common.sh@653 -- # es=1
00:20:14.444   06:30:31	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:14.444   06:30:31	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:14.444   06:30:31	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:14.444   06:30:31	-- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:20:14.444   06:30:31	-- common/autotest_common.sh@650 -- # local es=0
00:20:14.444   06:30:31	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:20:14.444   06:30:31	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:20:14.444   06:30:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:14.444    06:30:31	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:20:14.444   06:30:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:14.444   06:30:31	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:20:14.444   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.444  2024/12/16 06:30:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:20:14.444  request:
00:20:14.444  {
00:20:14.444  "method": "bdev_nvme_attach_controller",
00:20:14.444  "params": {
00:20:14.444  "name": "NVMe0",
00:20:14.444  "trtype": "tcp",
00:20:14.444  "traddr": "10.0.0.2",
00:20:14.444  "hostaddr": "10.0.0.2",
00:20:14.444  "hostsvcid": "60000",
00:20:14.444  "adrfam": "ipv4",
00:20:14.444  "trsvcid": "4420",
00:20:14.444  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:14.444  "multipath": "failover"
00:20:14.444  }
00:20:14.444  }
00:20:14.444  Got JSON-RPC error response
00:20:14.444  GoRPCClient: error on JSON-RPC call
00:20:14.444   06:30:31	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:20:14.444   06:30:31	-- common/autotest_common.sh@653 -- # es=1
00:20:14.444   06:30:31	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:14.444   06:30:31	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:14.444   06:30:31	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:14.444   06:30:31	-- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:14.444   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.444  
00:20:14.444   06:30:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.444   06:30:31	-- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:14.444   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.444   06:30:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.444   06:30:31	-- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:20:14.444   06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.444   06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.444  
00:20:14.444   06:30:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.444    06:30:31	-- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:14.444    06:30:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.444    06:30:31	-- host/multicontroller.sh@90 -- # grep -c NVMe
00:20:14.444    06:30:31	-- common/autotest_common.sh@10 -- # set +x
00:20:14.703    06:30:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.703   06:30:31	-- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
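For reference, the initiator-side flow above can be reproduced outside the harness roughly as follows (a sketch built from the commands in the trace; the waitforlisten and grep -c checks are simplified to a socket poll and a comment):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # bdevperf in wait-for-RPC mode (-z) with the traced workload: 128-deep 4096-byte writes for 1 s
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done

    # NVMe0: first path on port 4420, second path on port 4421
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1

    # drop the 4421 path again and re-attach it as a separate controller NVMe1
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_get_controllers | grep -c NVMe   # expect 2

    # kick off the I/O phase against the attached bdevs
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests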
00:20:14.704   06:30:31	-- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:15.641  0
00:20:15.641   06:30:32	-- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:20:15.641   06:30:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:15.641   06:30:32	-- common/autotest_common.sh@10 -- # set +x
00:20:15.641   06:30:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:15.641   06:30:32	-- host/multicontroller.sh@100 -- # killprocess 81957
00:20:15.641   06:30:32	-- common/autotest_common.sh@936 -- # '[' -z 81957 ']'
00:20:15.641   06:30:32	-- common/autotest_common.sh@940 -- # kill -0 81957
00:20:15.641    06:30:32	-- common/autotest_common.sh@941 -- # uname
00:20:15.641   06:30:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:15.641    06:30:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81957
00:20:15.900  killing process with pid 81957
00:20:15.900   06:30:32	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:15.900   06:30:32	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:15.900   06:30:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 81957'
00:20:15.900   06:30:32	-- common/autotest_common.sh@955 -- # kill 81957
00:20:15.900   06:30:32	-- common/autotest_common.sh@960 -- # wait 81957
00:20:15.900   06:30:32	-- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:15.900   06:30:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:15.900   06:30:32	-- common/autotest_common.sh@10 -- # set +x
00:20:16.160   06:30:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:16.160   06:30:32	-- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:20:16.160   06:30:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:16.160   06:30:32	-- common/autotest_common.sh@10 -- # set +x
00:20:16.160   06:30:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:16.160   06:30:32	-- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT
00:20:16.160   06:30:32	-- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:16.160   06:30:32	-- common/autotest_common.sh@1607 -- # read -r file
00:20:16.160    06:30:32	-- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f
00:20:16.160    06:30:32	-- common/autotest_common.sh@1606 -- # sort -u
00:20:16.160   06:30:32	-- common/autotest_common.sh@1608 -- # cat
00:20:16.160  --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:20:16.160  [2024-12-16 06:30:30.119062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:16.160  [2024-12-16 06:30:30.119174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81957 ]
00:20:16.160  [2024-12-16 06:30:30.250671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:16.160  [2024-12-16 06:30:30.343563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:16.160  [2024-12-16 06:30:31.406382] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 9e0f52d2-a8d9-428d-80b4-99219c506a21 already exists
00:20:16.160  [2024-12-16 06:30:31.406448] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:9e0f52d2-a8d9-428d-80b4-99219c506a21 alias for bdev NVMe1n1
00:20:16.160  [2024-12-16 06:30:31.406483] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:20:16.160  Running I/O for 1 seconds...
00:20:16.160  
00:20:16.160                                                                                                  Latency(us)
00:20:16.160  
[2024-12-16T06:30:33.136Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:16.160  
[2024-12-16T06:30:33.136Z]  Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:16.160  	 NVMe0n1             :       1.00   24868.05      97.14       0.00     0.00    5135.14    2427.81   12868.89
00:20:16.160  
[2024-12-16T06:30:33.136Z]  ===================================================================================================================
00:20:16.160  
[2024-12-16T06:30:33.136Z]  Total                       :              24868.05      97.14       0.00     0.00    5135.14    2427.81   12868.89
00:20:16.160  Received shutdown signal, test time was about 1.000000 seconds
00:20:16.160  
00:20:16.160                                                                                                  Latency(us)
00:20:16.160  
[2024-12-16T06:30:33.136Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:16.160  
[2024-12-16T06:30:33.136Z]  ===================================================================================================================
00:20:16.160  
[2024-12-16T06:30:33.136Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:20:16.160  --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:20:16.160   06:30:32	-- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:16.160   06:30:32	-- common/autotest_common.sh@1607 -- # read -r file
00:20:16.160   06:30:32	-- host/multicontroller.sh@108 -- # nvmftestfini
00:20:16.160   06:30:32	-- nvmf/common.sh@476 -- # nvmfcleanup
00:20:16.160   06:30:32	-- nvmf/common.sh@116 -- # sync
00:20:16.160   06:30:32	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:16.160   06:30:32	-- nvmf/common.sh@119 -- # set +e
00:20:16.160   06:30:32	-- nvmf/common.sh@120 -- # for i in {1..20}
00:20:16.160   06:30:32	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:16.160  rmmod nvme_tcp
00:20:16.160  rmmod nvme_fabrics
00:20:16.160  rmmod nvme_keyring
00:20:16.160   06:30:33	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:16.160   06:30:33	-- nvmf/common.sh@123 -- # set -e
00:20:16.160   06:30:33	-- nvmf/common.sh@124 -- # return 0
00:20:16.160   06:30:33	-- nvmf/common.sh@477 -- # '[' -n 81905 ']'
00:20:16.160   06:30:33	-- nvmf/common.sh@478 -- # killprocess 81905
00:20:16.160   06:30:33	-- common/autotest_common.sh@936 -- # '[' -z 81905 ']'
00:20:16.160   06:30:33	-- common/autotest_common.sh@940 -- # kill -0 81905
00:20:16.160    06:30:33	-- common/autotest_common.sh@941 -- # uname
00:20:16.160   06:30:33	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:16.160    06:30:33	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81905
00:20:16.160  killing process with pid 81905
00:20:16.160   06:30:33	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:16.160   06:30:33	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:16.160   06:30:33	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 81905'
00:20:16.160   06:30:33	-- common/autotest_common.sh@955 -- # kill 81905
00:20:16.160   06:30:33	-- common/autotest_common.sh@960 -- # wait 81905
00:20:16.729   06:30:33	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:16.729   06:30:33	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:16.729   06:30:33	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:16.729   06:30:33	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:16.729   06:30:33	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:16.729   06:30:33	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:16.729   06:30:33	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:16.729    06:30:33	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:16.729   06:30:33	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:20:16.729  
00:20:16.729  real	0m5.198s
00:20:16.729  user	0m16.109s
00:20:16.729  sys	0m1.118s
00:20:16.729  ************************************
00:20:16.729  END TEST nvmf_multicontroller
00:20:16.729  ************************************
00:20:16.729   06:30:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:16.729   06:30:33	-- common/autotest_common.sh@10 -- # set +x
00:20:16.729   06:30:33	-- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp
00:20:16.729   06:30:33	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:16.729   06:30:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:16.729   06:30:33	-- common/autotest_common.sh@10 -- # set +x
00:20:16.729  ************************************
00:20:16.729  START TEST nvmf_aer
00:20:16.729  ************************************
00:20:16.729   06:30:33	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp
00:20:16.729  * Looking for test storage...
00:20:16.729  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:16.729    06:30:33	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:16.729     06:30:33	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:16.729     06:30:33	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:16.729    06:30:33	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:16.729    06:30:33	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:16.729    06:30:33	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:16.729    06:30:33	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:16.729    06:30:33	-- scripts/common.sh@335 -- # IFS=.-:
00:20:16.729    06:30:33	-- scripts/common.sh@335 -- # read -ra ver1
00:20:16.729    06:30:33	-- scripts/common.sh@336 -- # IFS=.-:
00:20:16.729    06:30:33	-- scripts/common.sh@336 -- # read -ra ver2
00:20:16.729    06:30:33	-- scripts/common.sh@337 -- # local 'op=<'
00:20:16.729    06:30:33	-- scripts/common.sh@339 -- # ver1_l=2
00:20:16.729    06:30:33	-- scripts/common.sh@340 -- # ver2_l=1
00:20:16.729    06:30:33	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:16.729    06:30:33	-- scripts/common.sh@343 -- # case "$op" in
00:20:16.729    06:30:33	-- scripts/common.sh@344 -- # : 1
00:20:16.729    06:30:33	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:16.729    06:30:33	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:16.729     06:30:33	-- scripts/common.sh@364 -- # decimal 1
00:20:16.729     06:30:33	-- scripts/common.sh@352 -- # local d=1
00:20:16.729     06:30:33	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:16.729     06:30:33	-- scripts/common.sh@354 -- # echo 1
00:20:16.729    06:30:33	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:16.730     06:30:33	-- scripts/common.sh@365 -- # decimal 2
00:20:16.730     06:30:33	-- scripts/common.sh@352 -- # local d=2
00:20:16.730     06:30:33	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:16.730     06:30:33	-- scripts/common.sh@354 -- # echo 2
00:20:16.730    06:30:33	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:16.730    06:30:33	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:16.730    06:30:33	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:16.730    06:30:33	-- scripts/common.sh@367 -- # return 0
00:20:16.730    06:30:33	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:16.730    06:30:33	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:16.730  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:16.730  		--rc genhtml_branch_coverage=1
00:20:16.730  		--rc genhtml_function_coverage=1
00:20:16.730  		--rc genhtml_legend=1
00:20:16.730  		--rc geninfo_all_blocks=1
00:20:16.730  		--rc geninfo_unexecuted_blocks=1
00:20:16.730  		
00:20:16.730  		'
00:20:16.730    06:30:33	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:16.730  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:16.730  		--rc genhtml_branch_coverage=1
00:20:16.730  		--rc genhtml_function_coverage=1
00:20:16.730  		--rc genhtml_legend=1
00:20:16.730  		--rc geninfo_all_blocks=1
00:20:16.730  		--rc geninfo_unexecuted_blocks=1
00:20:16.730  		
00:20:16.730  		'
00:20:16.730    06:30:33	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:16.730  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:16.730  		--rc genhtml_branch_coverage=1
00:20:16.730  		--rc genhtml_function_coverage=1
00:20:16.730  		--rc genhtml_legend=1
00:20:16.730  		--rc geninfo_all_blocks=1
00:20:16.730  		--rc geninfo_unexecuted_blocks=1
00:20:16.730  		
00:20:16.730  		'
00:20:16.730    06:30:33	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:16.730  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:16.730  		--rc genhtml_branch_coverage=1
00:20:16.730  		--rc genhtml_function_coverage=1
00:20:16.730  		--rc genhtml_legend=1
00:20:16.730  		--rc geninfo_all_blocks=1
00:20:16.730  		--rc geninfo_unexecuted_blocks=1
00:20:16.730  		
00:20:16.730  		'
00:20:16.730   06:30:33	-- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:16.730     06:30:33	-- nvmf/common.sh@7 -- # uname -s
00:20:16.730    06:30:33	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:16.730    06:30:33	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:16.730    06:30:33	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:16.730    06:30:33	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:16.730    06:30:33	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:16.730    06:30:33	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:16.730    06:30:33	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:16.730    06:30:33	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:16.730    06:30:33	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:16.730     06:30:33	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:16.730    06:30:33	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:20:16.730    06:30:33	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:20:16.730    06:30:33	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:16.730    06:30:33	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:16.730    06:30:33	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:16.730    06:30:33	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:16.730     06:30:33	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:16.730     06:30:33	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:16.730     06:30:33	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:16.730      06:30:33	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:16.730      06:30:33	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:16.730      06:30:33	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:16.730      06:30:33	-- paths/export.sh@5 -- # export PATH
00:20:16.730      06:30:33	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:16.730    06:30:33	-- nvmf/common.sh@46 -- # : 0
00:20:16.730    06:30:33	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:16.730    06:30:33	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:16.730    06:30:33	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:16.730    06:30:33	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:16.730    06:30:33	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:16.730    06:30:33	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:16.730    06:30:33	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:16.730    06:30:33	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:16.730   06:30:33	-- host/aer.sh@11 -- # nvmftestinit
00:20:16.730   06:30:33	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:16.730   06:30:33	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:16.730   06:30:33	-- nvmf/common.sh@436 -- # prepare_net_devs
00:20:16.730   06:30:33	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:16.730   06:30:33	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:16.730   06:30:33	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:16.730   06:30:33	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:16.730    06:30:33	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:16.730   06:30:33	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:20:16.730   06:30:33	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:20:16.730   06:30:33	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:20:16.730   06:30:33	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:20:16.730   06:30:33	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:20:16.730   06:30:33	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:20:16.730   06:30:33	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:16.730   06:30:33	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:16.730   06:30:33	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:20:16.730   06:30:33	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:20:16.730   06:30:33	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:20:16.730   06:30:33	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:20:16.730   06:30:33	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:20:16.730   06:30:33	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:16.730   06:30:33	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:20:16.730   06:30:33	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:20:16.730   06:30:33	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:20:16.730   06:30:33	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:20:16.730   06:30:33	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:20:16.730   06:30:33	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:20:16.730  Cannot find device "nvmf_tgt_br"
00:20:16.730   06:30:33	-- nvmf/common.sh@154 -- # true
00:20:16.730   06:30:33	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:20:16.989  Cannot find device "nvmf_tgt_br2"
00:20:16.989   06:30:33	-- nvmf/common.sh@155 -- # true
00:20:16.989   06:30:33	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:20:16.989   06:30:33	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:20:16.989  Cannot find device "nvmf_tgt_br"
00:20:16.989   06:30:33	-- nvmf/common.sh@157 -- # true
00:20:16.989   06:30:33	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:20:16.990  Cannot find device "nvmf_tgt_br2"
00:20:16.990   06:30:33	-- nvmf/common.sh@158 -- # true
00:20:16.990   06:30:33	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:20:16.990   06:30:33	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:20:16.990   06:30:33	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:20:16.990  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:16.990   06:30:33	-- nvmf/common.sh@161 -- # true
00:20:16.990   06:30:33	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:20:16.990  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:16.990   06:30:33	-- nvmf/common.sh@162 -- # true
00:20:16.990   06:30:33	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:20:16.990   06:30:33	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:20:16.990   06:30:33	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:20:16.990   06:30:33	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:20:16.990   06:30:33	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:20:16.990   06:30:33	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:20:16.990   06:30:33	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:20:16.990   06:30:33	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:20:16.990   06:30:33	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:20:16.990   06:30:33	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:20:16.990   06:30:33	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:20:16.990   06:30:33	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:20:16.990   06:30:33	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:20:16.990   06:30:33	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:20:16.990   06:30:33	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:20:16.990   06:30:33	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:20:16.990   06:30:33	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:20:16.990   06:30:33	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:20:16.990   06:30:33	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:20:16.990   06:30:33	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:20:16.990   06:30:33	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:20:16.990   06:30:33	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:20:17.249   06:30:33	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:20:17.249   06:30:33	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:20:17.249  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:17.249  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms
00:20:17.249  
00:20:17.249  --- 10.0.0.2 ping statistics ---
00:20:17.249  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:17.249  rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:20:17.249   06:30:33	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:20:17.249  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:17.249  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms
00:20:17.249  
00:20:17.249  --- 10.0.0.3 ping statistics ---
00:20:17.249  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:17.249  rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:20:17.249   06:30:33	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:17.249  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:17.249  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:20:17.249  
00:20:17.249  --- 10.0.0.1 ping statistics ---
00:20:17.249  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:17.249  rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:20:17.249   06:30:33	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:17.249   06:30:33	-- nvmf/common.sh@421 -- # return 0
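nvmf_veth_init, traced above, builds the test network: one veth pair for the initiator side (nvmf_init_if in the root namespace, 10.0.0.1/24), two veth pairs whose target ends (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) live inside the nvmf_tgt_ns_spdk namespace, and a bridge nvmf_br joining the root-side peers, with iptables opened for port 4420. A condensed, standalone sketch of the same steps (names and addresses copied from the trace; requires root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # reachability check, as in the log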
00:20:17.249   06:30:33	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:20:17.249   06:30:33	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:17.249   06:30:33	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:20:17.249   06:30:33	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:20:17.249   06:30:33	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:17.249   06:30:33	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:20:17.249   06:30:33	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:20:17.249   06:30:34	-- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:20:17.249   06:30:34	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:20:17.249   06:30:34	-- common/autotest_common.sh@722 -- # xtrace_disable
00:20:17.249   06:30:34	-- common/autotest_common.sh@10 -- # set +x
00:20:17.249   06:30:34	-- nvmf/common.sh@469 -- # nvmfpid=82218
00:20:17.249   06:30:34	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:20:17.249   06:30:34	-- nvmf/common.sh@470 -- # waitforlisten 82218
00:20:17.249   06:30:34	-- common/autotest_common.sh@829 -- # '[' -z 82218 ']'
00:20:17.249   06:30:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:17.249  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:17.249   06:30:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:17.249   06:30:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:17.249   06:30:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:17.249   06:30:34	-- common/autotest_common.sh@10 -- # set +x
00:20:17.249  [2024-12-16 06:30:34.059595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:17.249  [2024-12-16 06:30:34.059650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:17.249  [2024-12-16 06:30:34.196036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:17.509  [2024-12-16 06:30:34.300653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:17.509  [2024-12-16 06:30:34.301120] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:17.509  [2024-12-16 06:30:34.301284] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:17.509  [2024-12-16 06:30:34.301448] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:17.509  [2024-12-16 06:30:34.301730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:17.509  [2024-12-16 06:30:34.301819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:17.509  [2024-12-16 06:30:34.301889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:17.509  [2024-12-16 06:30:34.301893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:18.078   06:30:35	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:18.078   06:30:35	-- common/autotest_common.sh@862 -- # return 0
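nvmfappstart then launches the target inside that namespace and blocks until the RPC socket is usable. A minimal sketch of the start-and-wait pattern (binary and socket paths copied from the trace; the harness's waitforlisten does a more thorough readiness check than this plain socket poll):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait (up to ~10 s) for the UNIX-domain RPC socket to appear
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done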
00:20:18.078   06:30:35	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:20:18.078   06:30:35	-- common/autotest_common.sh@728 -- # xtrace_disable
00:20:18.078   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.078   06:30:35	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:18.078   06:30:35	-- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:18.078   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.078   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.078  [2024-12-16 06:30:35.048698] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:18.345   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.345   06:30:35	-- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:20:18.345   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.345   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.345  Malloc0
00:20:18.345   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.345   06:30:35	-- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:20:18.345   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.345   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.345   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.345   06:30:35	-- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:18.345   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.345   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.345   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.345   06:30:35	-- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:18.345   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.345   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.345  [2024-12-16 06:30:35.114671] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:18.345   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.345   06:30:35	-- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:20:18.345   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.345   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.345  [2024-12-16 06:30:35.122434] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:20:18.345  [
00:20:18.345  {
00:20:18.345  "allow_any_host": true,
00:20:18.345  "hosts": [],
00:20:18.345  "listen_addresses": [],
00:20:18.345  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:18.345  "subtype": "Discovery"
00:20:18.345  },
00:20:18.345  {
00:20:18.345  "allow_any_host": true,
00:20:18.345  "hosts": [],
00:20:18.345  "listen_addresses": [
00:20:18.345  {
00:20:18.345  "adrfam": "IPv4",
00:20:18.345  "traddr": "10.0.0.2",
00:20:18.345  "transport": "TCP",
00:20:18.345  "trsvcid": "4420",
00:20:18.346  "trtype": "TCP"
00:20:18.346  }
00:20:18.346  ],
00:20:18.346  "max_cntlid": 65519,
00:20:18.346  "max_namespaces": 2,
00:20:18.346  "min_cntlid": 1,
00:20:18.346  "model_number": "SPDK bdev Controller",
00:20:18.346  "namespaces": [
00:20:18.346  {
00:20:18.346  "bdev_name": "Malloc0",
00:20:18.346  "name": "Malloc0",
00:20:18.346  "nguid": "179850AFE7304D87B242992B2D00FC02",
00:20:18.346  "nsid": 1,
00:20:18.346  "uuid": "179850af-e730-4d87-b242-992b2d00fc02"
00:20:18.346  }
00:20:18.346  ],
00:20:18.346  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:18.346  "serial_number": "SPDK00000000000001",
00:20:18.346  "subtype": "NVMe"
00:20:18.346  }
00:20:18.346  ]
00:20:18.346   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
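The subsystem dump above reflects the rpc_cmd sequence that preceded it (rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock). Issued directly, with the arguments copied from the trace, the same configuration is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 --name Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems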
00:20:18.346   06:30:35	-- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:20:18.346   06:30:35	-- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:20:18.346   06:30:35	-- host/aer.sh@33 -- # aerpid=82271
00:20:18.346   06:30:35	-- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:20:18.346   06:30:35	-- common/autotest_common.sh@1254 -- # local i=0
00:20:18.346   06:30:35	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:20:18.346   06:30:35	-- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:20:18.346   06:30:35	-- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']'
00:20:18.346   06:30:35	-- common/autotest_common.sh@1257 -- # i=1
00:20:18.346   06:30:35	-- common/autotest_common.sh@1258 -- # sleep 0.1
00:20:18.346   06:30:35	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:20:18.346   06:30:35	-- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']'
00:20:18.346   06:30:35	-- common/autotest_common.sh@1257 -- # i=2
00:20:18.346   06:30:35	-- common/autotest_common.sh@1258 -- # sleep 0.1
00:20:18.606   06:30:35	-- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:20:18.606   06:30:35	-- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:20:18.606   06:30:35	-- common/autotest_common.sh@1265 -- # return 0
00:20:18.606   06:30:35	-- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:20:18.606   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.606   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.606  Malloc1
00:20:18.606   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.606   06:30:35	-- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:20:18.606   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.606   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.606   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.606   06:30:35	-- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:20:18.606   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.606   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.606  Asynchronous Event Request test
00:20:18.606  Attaching to 10.0.0.2
00:20:18.606  Attached to 10.0.0.2
00:20:18.606  Registering asynchronous event callbacks...
00:20:18.606  Starting namespace attribute notice tests for all controllers...
00:20:18.606  10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:20:18.606  aer_cb - Changed Namespace
00:20:18.606  Cleaning up...
00:20:18.606  [
00:20:18.606  {
00:20:18.606  "allow_any_host": true,
00:20:18.606  "hosts": [],
00:20:18.606  "listen_addresses": [],
00:20:18.606  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:18.606  "subtype": "Discovery"
00:20:18.606  },
00:20:18.606  {
00:20:18.606  "allow_any_host": true,
00:20:18.606  "hosts": [],
00:20:18.606  "listen_addresses": [
00:20:18.606  {
00:20:18.606  "adrfam": "IPv4",
00:20:18.606  "traddr": "10.0.0.2",
00:20:18.606  "transport": "TCP",
00:20:18.606  "trsvcid": "4420",
00:20:18.606  "trtype": "TCP"
00:20:18.606  }
00:20:18.606  ],
00:20:18.606  "max_cntlid": 65519,
00:20:18.606  "max_namespaces": 2,
00:20:18.606  "min_cntlid": 1,
00:20:18.606  "model_number": "SPDK bdev Controller",
00:20:18.606  "namespaces": [
00:20:18.606  {
00:20:18.606  "bdev_name": "Malloc0",
00:20:18.606  "name": "Malloc0",
00:20:18.606  "nguid": "179850AFE7304D87B242992B2D00FC02",
00:20:18.606  "nsid": 1,
00:20:18.606  "uuid": "179850af-e730-4d87-b242-992b2d00fc02"
00:20:18.606  },
00:20:18.606  {
00:20:18.606  "bdev_name": "Malloc1",
00:20:18.606  "name": "Malloc1",
00:20:18.606  "nguid": "A7D323A05C904F98ACEC5F384C44D137",
00:20:18.606  "nsid": 2,
00:20:18.606  "uuid": "a7d323a0-5c90-4f98-acec-5f384c44d137"
00:20:18.606  }
00:20:18.606  ],
00:20:18.606  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:18.606  "serial_number": "SPDK00000000000001",
00:20:18.606  "subtype": "NVMe"
00:20:18.606  }
00:20:18.606  ]
00:20:18.606   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.606   06:30:35	-- host/aer.sh@43 -- # wait 82271
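The AER exercise above follows a simple handshake: the aer tool connects expecting two namespaces (-n 2) and touches /tmp/aer_touch_file once its event callbacks are registered; the script waits for that file, hot-adds Malloc1 as namespace 2, which raises the namespace-attribute-changed AEN ("aer_cb - Changed Namespace"), and then reaps the tool. Condensed, with paths and arguments copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  rm -f /tmp/aer_touch_file
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done    # waitforfile
  $rpc bdev_malloc_create 64 4096 --name Malloc1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2    # triggers the AEN
  wait "$aerpid"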
00:20:18.606   06:30:35	-- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:20:18.606   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.606   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.606   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.606   06:30:35	-- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:20:18.606   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.606   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.606   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.606   06:30:35	-- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:18.606   06:30:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:18.606   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:18.606   06:30:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:18.606   06:30:35	-- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:20:18.606   06:30:35	-- host/aer.sh@51 -- # nvmftestfini
00:20:18.606   06:30:35	-- nvmf/common.sh@476 -- # nvmfcleanup
00:20:18.606   06:30:35	-- nvmf/common.sh@116 -- # sync
00:20:18.606   06:30:35	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:18.606   06:30:35	-- nvmf/common.sh@119 -- # set +e
00:20:18.606   06:30:35	-- nvmf/common.sh@120 -- # for i in {1..20}
00:20:18.606   06:30:35	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:18.607  rmmod nvme_tcp
00:20:18.607  rmmod nvme_fabrics
00:20:18.866  rmmod nvme_keyring
00:20:18.866   06:30:35	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:18.866   06:30:35	-- nvmf/common.sh@123 -- # set -e
00:20:18.866   06:30:35	-- nvmf/common.sh@124 -- # return 0
00:20:18.866   06:30:35	-- nvmf/common.sh@477 -- # '[' -n 82218 ']'
00:20:18.866   06:30:35	-- nvmf/common.sh@478 -- # killprocess 82218
00:20:18.866   06:30:35	-- common/autotest_common.sh@936 -- # '[' -z 82218 ']'
00:20:18.866   06:30:35	-- common/autotest_common.sh@940 -- # kill -0 82218
00:20:18.866    06:30:35	-- common/autotest_common.sh@941 -- # uname
00:20:18.866   06:30:35	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:18.866    06:30:35	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82218
00:20:18.866  killing process with pid 82218
00:20:18.866   06:30:35	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:18.866   06:30:35	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:18.866   06:30:35	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 82218'
00:20:18.866   06:30:35	-- common/autotest_common.sh@955 -- # kill 82218
00:20:18.866  [2024-12-16 06:30:35.646457] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:20:18.866   06:30:35	-- common/autotest_common.sh@960 -- # wait 82218
00:20:19.125   06:30:35	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:19.125   06:30:35	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:19.125   06:30:35	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:19.125   06:30:35	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:19.125   06:30:35	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:19.125   06:30:35	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:19.125   06:30:35	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:19.125    06:30:35	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:19.125   06:30:35	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:20:19.125  
00:20:19.125  real	0m2.426s
00:20:19.125  user	0m6.464s
00:20:19.125  sys	0m0.679s
00:20:19.125   06:30:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:19.125  ************************************
00:20:19.125   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:19.125  END TEST nvmf_aer
00:20:19.125  ************************************
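nvmftestfini, traced just above, unwinds the setup: unload the host-side kernel modules, kill the target process, and flush the initiator address (the namespace itself is torn down by _remove_spdk_ns, whose output is redirected to /dev/null). Roughly:

  modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess on the nvmf_tgt pid (82218 here)
  ip -4 addr flush nvmf_init_if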
00:20:19.125   06:30:35	-- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:20:19.125   06:30:35	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:19.125   06:30:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:19.125   06:30:35	-- common/autotest_common.sh@10 -- # set +x
00:20:19.125  ************************************
00:20:19.125  START TEST nvmf_async_init
00:20:19.125  ************************************
00:20:19.125   06:30:35	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:20:19.125  * Looking for test storage...
00:20:19.125  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:19.125    06:30:36	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:19.125     06:30:36	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:19.125     06:30:36	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:19.384    06:30:36	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:19.384    06:30:36	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:19.384    06:30:36	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:19.384    06:30:36	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:19.384    06:30:36	-- scripts/common.sh@335 -- # IFS=.-:
00:20:19.384    06:30:36	-- scripts/common.sh@335 -- # read -ra ver1
00:20:19.384    06:30:36	-- scripts/common.sh@336 -- # IFS=.-:
00:20:19.384    06:30:36	-- scripts/common.sh@336 -- # read -ra ver2
00:20:19.384    06:30:36	-- scripts/common.sh@337 -- # local 'op=<'
00:20:19.384    06:30:36	-- scripts/common.sh@339 -- # ver1_l=2
00:20:19.384    06:30:36	-- scripts/common.sh@340 -- # ver2_l=1
00:20:19.385    06:30:36	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:19.385    06:30:36	-- scripts/common.sh@343 -- # case "$op" in
00:20:19.385    06:30:36	-- scripts/common.sh@344 -- # : 1
00:20:19.385    06:30:36	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:19.385    06:30:36	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:19.385     06:30:36	-- scripts/common.sh@364 -- # decimal 1
00:20:19.385     06:30:36	-- scripts/common.sh@352 -- # local d=1
00:20:19.385     06:30:36	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:19.385     06:30:36	-- scripts/common.sh@354 -- # echo 1
00:20:19.385    06:30:36	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:19.385     06:30:36	-- scripts/common.sh@365 -- # decimal 2
00:20:19.385     06:30:36	-- scripts/common.sh@352 -- # local d=2
00:20:19.385     06:30:36	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:19.385     06:30:36	-- scripts/common.sh@354 -- # echo 2
00:20:19.385    06:30:36	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:19.385    06:30:36	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:19.385    06:30:36	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:19.385    06:30:36	-- scripts/common.sh@367 -- # return 0
00:20:19.385    06:30:36	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:19.385    06:30:36	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:19.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.385  		--rc genhtml_branch_coverage=1
00:20:19.385  		--rc genhtml_function_coverage=1
00:20:19.385  		--rc genhtml_legend=1
00:20:19.385  		--rc geninfo_all_blocks=1
00:20:19.385  		--rc geninfo_unexecuted_blocks=1
00:20:19.385  		
00:20:19.385  		'
00:20:19.385    06:30:36	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:19.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.385  		--rc genhtml_branch_coverage=1
00:20:19.385  		--rc genhtml_function_coverage=1
00:20:19.385  		--rc genhtml_legend=1
00:20:19.385  		--rc geninfo_all_blocks=1
00:20:19.385  		--rc geninfo_unexecuted_blocks=1
00:20:19.385  		
00:20:19.385  		'
00:20:19.385    06:30:36	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:19.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.385  		--rc genhtml_branch_coverage=1
00:20:19.385  		--rc genhtml_function_coverage=1
00:20:19.385  		--rc genhtml_legend=1
00:20:19.385  		--rc geninfo_all_blocks=1
00:20:19.385  		--rc geninfo_unexecuted_blocks=1
00:20:19.385  		
00:20:19.385  		'
00:20:19.385    06:30:36	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:19.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.385  		--rc genhtml_branch_coverage=1
00:20:19.385  		--rc genhtml_function_coverage=1
00:20:19.385  		--rc genhtml_legend=1
00:20:19.385  		--rc geninfo_all_blocks=1
00:20:19.385  		--rc geninfo_unexecuted_blocks=1
00:20:19.385  		
00:20:19.385  		'
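The block above is the coverage probe every host test script runs when sourcing autotest_common.sh: it captures the last field of `lcov --version`, runs the field-by-field cmp_versions helper (traced here as `lt 1.15 2`), and exports the LCOV_OPTS/LCOV flags shown. A simplified, self-contained version of that style of comparison (not the harness's exact implementation):

  version_lt() {    # return 0 if $1 sorts before $2, comparing dot-separated fields numerically
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1
  }
  lcov_version=$(lcov --version | awk '{print $NF}')    # captured as in the trace
  if version_lt 1.15 2; then                            # the trace compares 1.15 against 2
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi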
00:20:19.385   06:30:36	-- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:19.385     06:30:36	-- nvmf/common.sh@7 -- # uname -s
00:20:19.385    06:30:36	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:19.385    06:30:36	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:19.385    06:30:36	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:19.385    06:30:36	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:19.385    06:30:36	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:19.385    06:30:36	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:19.385    06:30:36	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:19.385    06:30:36	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:19.385    06:30:36	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:19.385     06:30:36	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:19.385    06:30:36	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:20:19.385    06:30:36	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:20:19.385    06:30:36	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:19.385    06:30:36	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:19.385    06:30:36	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:19.385    06:30:36	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:19.385     06:30:36	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:19.385     06:30:36	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:19.385     06:30:36	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:19.385      06:30:36	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:19.385      06:30:36	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:19.385      06:30:36	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:19.385      06:30:36	-- paths/export.sh@5 -- # export PATH
00:20:19.385      06:30:36	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:19.385    06:30:36	-- nvmf/common.sh@46 -- # : 0
00:20:19.385    06:30:36	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:19.385    06:30:36	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:19.385    06:30:36	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:19.385    06:30:36	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:19.385    06:30:36	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:19.385    06:30:36	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:19.385    06:30:36	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:19.385    06:30:36	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:19.385   06:30:36	-- host/async_init.sh@13 -- # null_bdev_size=1024
00:20:19.385   06:30:36	-- host/async_init.sh@14 -- # null_block_size=512
00:20:19.385   06:30:36	-- host/async_init.sh@15 -- # null_bdev=null0
00:20:19.385   06:30:36	-- host/async_init.sh@16 -- # nvme_bdev=nvme0
00:20:19.385    06:30:36	-- host/async_init.sh@20 -- # uuidgen
00:20:19.385    06:30:36	-- host/async_init.sh@20 -- # tr -d -
00:20:19.385   06:30:36	-- host/async_init.sh@20 -- # nguid=136745715386408d91952d4c7c3ed7ef
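The nguid used below is simply a freshly generated UUID with the dashes stripped: `uuidgen | tr -d -` yields the 32 hex digits 136745715386408d91952d4c7c3ed7ef seen here, and the attached bdev later reports the same value re-dashed as its uuid (13674571-5386-408d-9195-2d4c7c3ed7ef). For example:

  nguid=$(uuidgen | tr -d -)    # e.g. 136745715386408d91952d4c7c3ed7ef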
00:20:19.385   06:30:36	-- host/async_init.sh@22 -- # nvmftestinit
00:20:19.385   06:30:36	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:19.385   06:30:36	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:19.385   06:30:36	-- nvmf/common.sh@436 -- # prepare_net_devs
00:20:19.385   06:30:36	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:19.385   06:30:36	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:19.385   06:30:36	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:19.385   06:30:36	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:19.385    06:30:36	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:19.385   06:30:36	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:20:19.385   06:30:36	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:20:19.385   06:30:36	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:20:19.385   06:30:36	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:20:19.385   06:30:36	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:20:19.385   06:30:36	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:20:19.385   06:30:36	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:19.385   06:30:36	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:19.385   06:30:36	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:20:19.385   06:30:36	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:20:19.385   06:30:36	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:20:19.385   06:30:36	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:20:19.385   06:30:36	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:20:19.385   06:30:36	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:19.385   06:30:36	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:20:19.385   06:30:36	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:20:19.385   06:30:36	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:20:19.385   06:30:36	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:20:19.385   06:30:36	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:20:19.385   06:30:36	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:20:19.385  Cannot find device "nvmf_tgt_br"
00:20:19.385   06:30:36	-- nvmf/common.sh@154 -- # true
00:20:19.385   06:30:36	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:20:19.385  Cannot find device "nvmf_tgt_br2"
00:20:19.386   06:30:36	-- nvmf/common.sh@155 -- # true
00:20:19.386   06:30:36	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:20:19.386   06:30:36	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:20:19.386  Cannot find device "nvmf_tgt_br"
00:20:19.386   06:30:36	-- nvmf/common.sh@157 -- # true
00:20:19.386   06:30:36	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:20:19.386  Cannot find device "nvmf_tgt_br2"
00:20:19.386   06:30:36	-- nvmf/common.sh@158 -- # true
00:20:19.386   06:30:36	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:20:19.386   06:30:36	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:20:19.386   06:30:36	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:20:19.386  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:19.386   06:30:36	-- nvmf/common.sh@161 -- # true
00:20:19.386   06:30:36	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:20:19.386  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:19.386   06:30:36	-- nvmf/common.sh@162 -- # true
00:20:19.386   06:30:36	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:20:19.386   06:30:36	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:20:19.386   06:30:36	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:20:19.386   06:30:36	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:20:19.386   06:30:36	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:20:19.386   06:30:36	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:20:19.645   06:30:36	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:20:19.645   06:30:36	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:20:19.645   06:30:36	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:20:19.645   06:30:36	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:20:19.645   06:30:36	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:20:19.645   06:30:36	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:20:19.645   06:30:36	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:20:19.645   06:30:36	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:20:19.645   06:30:36	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:20:19.645   06:30:36	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:20:19.645   06:30:36	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:20:19.645   06:30:36	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:20:19.645   06:30:36	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:20:19.645   06:30:36	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:20:19.645   06:30:36	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:20:19.645   06:30:36	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:20:19.645   06:30:36	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:20:19.645   06:30:36	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:20:19.645  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:19.645  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:20:19.645  
00:20:19.645  --- 10.0.0.2 ping statistics ---
00:20:19.645  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:19.645  rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:20:19.645   06:30:36	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:20:19.645  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:19.645  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms
00:20:19.645  
00:20:19.645  --- 10.0.0.3 ping statistics ---
00:20:19.645  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:19.645  rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:20:19.645   06:30:36	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:19.645  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:19.645  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms
00:20:19.645  
00:20:19.645  --- 10.0.0.1 ping statistics ---
00:20:19.645  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:19.645  rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
00:20:19.645   06:30:36	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:19.645   06:30:36	-- nvmf/common.sh@421 -- # return 0
00:20:19.645   06:30:36	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:20:19.645   06:30:36	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:19.645   06:30:36	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:20:19.645   06:30:36	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:20:19.645   06:30:36	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:19.645   06:30:36	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:20:19.645   06:30:36	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:20:19.645   06:30:36	-- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:20:19.645   06:30:36	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:20:19.645   06:30:36	-- common/autotest_common.sh@722 -- # xtrace_disable
00:20:19.645   06:30:36	-- common/autotest_common.sh@10 -- # set +x
00:20:19.645   06:30:36	-- nvmf/common.sh@469 -- # nvmfpid=82448
00:20:19.645   06:30:36	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:20:19.645   06:30:36	-- nvmf/common.sh@470 -- # waitforlisten 82448
00:20:19.645   06:30:36	-- common/autotest_common.sh@829 -- # '[' -z 82448 ']'
00:20:19.645   06:30:36	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:19.645   06:30:36	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:19.645  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:19.645   06:30:36	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:19.645   06:30:36	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:19.645   06:30:36	-- common/autotest_common.sh@10 -- # set +x
00:20:19.645  [2024-12-16 06:30:36.577453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:19.645  [2024-12-16 06:30:36.577543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:19.904  [2024-12-16 06:30:36.717309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:19.904  [2024-12-16 06:30:36.791093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:19.904  [2024-12-16 06:30:36.791232] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:19.904  [2024-12-16 06:30:36.791245] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:19.904  [2024-12-16 06:30:36.791254] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:19.904  [2024-12-16 06:30:36.791283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:20.862   06:30:37	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:20.862   06:30:37	-- common/autotest_common.sh@862 -- # return 0
00:20:20.862   06:30:37	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:20:20.862   06:30:37	-- common/autotest_common.sh@728 -- # xtrace_disable
00:20:20.862   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:20.862   06:30:37	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:20.862   06:30:37	-- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:20:20.863   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.863   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:20.863  [2024-12-16 06:30:37.568099] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:20.863   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.863   06:30:37	-- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512
00:20:20.863   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.863   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:20.863  null0
00:20:20.863   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.863   06:30:37	-- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine
00:20:20.863   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.863   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:20.863   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.863   06:30:37	-- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
00:20:20.863   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.863   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:20.863   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.863   06:30:37	-- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 136745715386408d91952d4c7c3ed7ef
00:20:20.863   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.863   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:20.863   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.863   06:30:37	-- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:20:20.863   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.863   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:20.863  [2024-12-16 06:30:37.608200] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:20.863   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:20.863   06:30:37	-- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
00:20:20.863   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.863   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:21.171  nvme0n1
00:20:21.171   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
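The async_init target configuration and host attach traced above amount to: create the TCP transport, back the subsystem with a null bdev, expose it under the generated nguid, listen on 10.0.0.2:4420, and attach a host-side controller so that nvme0n1 appears as a bdev. With arguments copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_null_create null0 1024 512          # 1024 MiB, 512-byte blocks (num_blocks 2097152 below)
  $rpc bdev_wait_for_examine
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0               # prints the new bdev name, nvme0n1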
00:20:21.171   06:30:37	-- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:20:21.171   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.171   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:21.171  [
00:20:21.171  {
00:20:21.171  "aliases": [
00:20:21.171  "13674571-5386-408d-9195-2d4c7c3ed7ef"
00:20:21.171  ],
00:20:21.171  "assigned_rate_limits": {
00:20:21.171  "r_mbytes_per_sec": 0,
00:20:21.171  "rw_ios_per_sec": 0,
00:20:21.171  "rw_mbytes_per_sec": 0,
00:20:21.171  "w_mbytes_per_sec": 0
00:20:21.171  },
00:20:21.171  "block_size": 512,
00:20:21.171  "claimed": false,
00:20:21.171  "driver_specific": {
00:20:21.171  "mp_policy": "active_passive",
00:20:21.171  "nvme": [
00:20:21.171  {
00:20:21.171  "ctrlr_data": {
00:20:21.171  "ana_reporting": false,
00:20:21.171  "cntlid": 1,
00:20:21.171  "firmware_revision": "24.01.1",
00:20:21.171  "model_number": "SPDK bdev Controller",
00:20:21.171  "multi_ctrlr": true,
00:20:21.171  "oacs": {
00:20:21.171  "firmware": 0,
00:20:21.171  "format": 0,
00:20:21.171  "ns_manage": 0,
00:20:21.171  "security": 0
00:20:21.171  },
00:20:21.171  "serial_number": "00000000000000000000",
00:20:21.171  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:20:21.171  "vendor_id": "0x8086"
00:20:21.171  },
00:20:21.171  "ns_data": {
00:20:21.171  "can_share": true,
00:20:21.171  "id": 1
00:20:21.171  },
00:20:21.171  "trid": {
00:20:21.171  "adrfam": "IPv4",
00:20:21.171  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:20:21.171  "traddr": "10.0.0.2",
00:20:21.171  "trsvcid": "4420",
00:20:21.171  "trtype": "TCP"
00:20:21.171  },
00:20:21.171  "vs": {
00:20:21.171  "nvme_version": "1.3"
00:20:21.171  }
00:20:21.171  }
00:20:21.171  ]
00:20:21.171  },
00:20:21.171  "name": "nvme0n1",
00:20:21.171  "num_blocks": 2097152,
00:20:21.171  "product_name": "NVMe disk",
00:20:21.171  "supported_io_types": {
00:20:21.171  "abort": true,
00:20:21.171  "compare": true,
00:20:21.171  "compare_and_write": true,
00:20:21.171  "flush": true,
00:20:21.171  "nvme_admin": true,
00:20:21.171  "nvme_io": true,
00:20:21.171  "read": true,
00:20:21.171  "reset": true,
00:20:21.171  "unmap": false,
00:20:21.171  "write": true,
00:20:21.171  "write_zeroes": true
00:20:21.171  },
00:20:21.171  "uuid": "13674571-5386-408d-9195-2d4c7c3ed7ef",
00:20:21.171  "zoned": false
00:20:21.171  }
00:20:21.171  ]
00:20:21.171   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.171   06:30:37	-- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:20:21.171   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.171   06:30:37	-- common/autotest_common.sh@10 -- # set +x
00:20:21.171  [2024-12-16 06:30:37.864325] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:20:21.171  [2024-12-16 06:30:37.864399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8df90 (9): Bad file descriptor
00:20:21.171  [2024-12-16 06:30:37.996599] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:21.171   06:30:37	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
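bdev_nvme_reset_controller above disconnects and reconnects the fabrics controller behind nvme0: the host logs "resetting controller", the old qpair flush fails with the expected bad file descriptor, and the reset completes. The observable effect in the bdev_get_bdevs dump that follows is cntlid moving from 1 to 2, since the reconnect creates a new controller on the target. The reset itself is a single RPC:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_reset_controller nvme0
  $rpc bdev_get_bdevs -b nvme0n1    # cntlid should have advanced after the reset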
00:20:21.171   06:30:37	-- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:20:21.171   06:30:37	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.171   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.171  [
00:20:21.171  {
00:20:21.171  "aliases": [
00:20:21.171  "13674571-5386-408d-9195-2d4c7c3ed7ef"
00:20:21.171  ],
00:20:21.171  "assigned_rate_limits": {
00:20:21.171  "r_mbytes_per_sec": 0,
00:20:21.171  "rw_ios_per_sec": 0,
00:20:21.171  "rw_mbytes_per_sec": 0,
00:20:21.171  "w_mbytes_per_sec": 0
00:20:21.171  },
00:20:21.171  "block_size": 512,
00:20:21.171  "claimed": false,
00:20:21.171  "driver_specific": {
00:20:21.171  "mp_policy": "active_passive",
00:20:21.171  "nvme": [
00:20:21.171  {
00:20:21.171  "ctrlr_data": {
00:20:21.171  "ana_reporting": false,
00:20:21.171  "cntlid": 2,
00:20:21.171  "firmware_revision": "24.01.1",
00:20:21.171  "model_number": "SPDK bdev Controller",
00:20:21.171  "multi_ctrlr": true,
00:20:21.171  "oacs": {
00:20:21.171  "firmware": 0,
00:20:21.171  "format": 0,
00:20:21.171  "ns_manage": 0,
00:20:21.171  "security": 0
00:20:21.171  },
00:20:21.171  "serial_number": "00000000000000000000",
00:20:21.171  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:20:21.171  "vendor_id": "0x8086"
00:20:21.171  },
00:20:21.171  "ns_data": {
00:20:21.171  "can_share": true,
00:20:21.171  "id": 1
00:20:21.171  },
00:20:21.171  "trid": {
00:20:21.171  "adrfam": "IPv4",
00:20:21.171  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:20:21.171  "traddr": "10.0.0.2",
00:20:21.171  "trsvcid": "4420",
00:20:21.172  "trtype": "TCP"
00:20:21.172  },
00:20:21.172  "vs": {
00:20:21.172  "nvme_version": "1.3"
00:20:21.172  }
00:20:21.172  }
00:20:21.172  ]
00:20:21.172  },
00:20:21.172  "name": "nvme0n1",
00:20:21.172  "num_blocks": 2097152,
00:20:21.172  "product_name": "NVMe disk",
00:20:21.172  "supported_io_types": {
00:20:21.172  "abort": true,
00:20:21.172  "compare": true,
00:20:21.172  "compare_and_write": true,
00:20:21.172  "flush": true,
00:20:21.172  "nvme_admin": true,
00:20:21.172  "nvme_io": true,
00:20:21.172  "read": true,
00:20:21.172  "reset": true,
00:20:21.172  "unmap": false,
00:20:21.172  "write": true,
00:20:21.172  "write_zeroes": true
00:20:21.172  },
00:20:21.172  "uuid": "13674571-5386-408d-9195-2d4c7c3ed7ef",
00:20:21.172  "zoned": false
00:20:21.172  }
00:20:21.172  ]
00:20:21.172   06:30:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.172   06:30:38	-- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:21.172   06:30:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.172   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.172   06:30:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.172    06:30:38	-- host/async_init.sh@53 -- # mktemp
00:20:21.172   06:30:38	-- host/async_init.sh@53 -- # key_path=/tmp/tmp.W37dRt0ErW
00:20:21.172   06:30:38	-- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:20:21.172   06:30:38	-- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.W37dRt0ErW
00:20:21.172   06:30:38	-- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
00:20:21.172   06:30:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.172   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.172   06:30:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.172   06:30:38	-- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
00:20:21.172   06:30:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.172   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.172  [2024-12-16 06:30:38.056434] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:21.172  [2024-12-16 06:30:38.056586] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:20:21.172   06:30:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.172   06:30:38	-- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W37dRt0ErW
00:20:21.172   06:30:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.172   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.172   06:30:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.172   06:30:38	-- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.W37dRt0ErW
00:20:21.172   06:30:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.172   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.172  [2024-12-16 06:30:38.072431] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:21.444  nvme0n1
00:20:21.444   06:30:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
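The secure-channel portion above stores an interchange-format TLS PSK in a temporary file, restricts the subsystem to an explicitly allowed host, adds a listener on port 4421 with --secure-channel, and attaches the host controller with the same PSK. With values copied from the trace (writing the key into $key_path is implied by the echo/chmod pair):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # the key file is removed at the end of the test, as shown further below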
00:20:21.444   06:30:38	-- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:20:21.444   06:30:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.444   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.444  [
00:20:21.444  {
00:20:21.444  "aliases": [
00:20:21.444  "13674571-5386-408d-9195-2d4c7c3ed7ef"
00:20:21.444  ],
00:20:21.444  "assigned_rate_limits": {
00:20:21.444  "r_mbytes_per_sec": 0,
00:20:21.444  "rw_ios_per_sec": 0,
00:20:21.444  "rw_mbytes_per_sec": 0,
00:20:21.444  "w_mbytes_per_sec": 0
00:20:21.444  },
00:20:21.444  "block_size": 512,
00:20:21.444  "claimed": false,
00:20:21.444  "driver_specific": {
00:20:21.444  "mp_policy": "active_passive",
00:20:21.444  "nvme": [
00:20:21.444  {
00:20:21.444  "ctrlr_data": {
00:20:21.444  "ana_reporting": false,
00:20:21.444  "cntlid": 3,
00:20:21.444  "firmware_revision": "24.01.1",
00:20:21.444  "model_number": "SPDK bdev Controller",
00:20:21.444  "multi_ctrlr": true,
00:20:21.444  "oacs": {
00:20:21.444  "firmware": 0,
00:20:21.444  "format": 0,
00:20:21.444  "ns_manage": 0,
00:20:21.444  "security": 0
00:20:21.444  },
00:20:21.445  "serial_number": "00000000000000000000",
00:20:21.445  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:20:21.445  "vendor_id": "0x8086"
00:20:21.445  },
00:20:21.445  "ns_data": {
00:20:21.445  "can_share": true,
00:20:21.445  "id": 1
00:20:21.445  },
00:20:21.445  "trid": {
00:20:21.445  "adrfam": "IPv4",
00:20:21.445  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:20:21.445  "traddr": "10.0.0.2",
00:20:21.445  "trsvcid": "4421",
00:20:21.445  "trtype": "TCP"
00:20:21.445  },
00:20:21.445  "vs": {
00:20:21.445  "nvme_version": "1.3"
00:20:21.445  }
00:20:21.445  }
00:20:21.445  ]
00:20:21.445  },
00:20:21.445  "name": "nvme0n1",
00:20:21.445  "num_blocks": 2097152,
00:20:21.445  "product_name": "NVMe disk",
00:20:21.445  "supported_io_types": {
00:20:21.445  "abort": true,
00:20:21.445  "compare": true,
00:20:21.445  "compare_and_write": true,
00:20:21.445  "flush": true,
00:20:21.445  "nvme_admin": true,
00:20:21.445  "nvme_io": true,
00:20:21.445  "read": true,
00:20:21.445  "reset": true,
00:20:21.445  "unmap": false,
00:20:21.445  "write": true,
00:20:21.445  "write_zeroes": true
00:20:21.445  },
00:20:21.445  "uuid": "13674571-5386-408d-9195-2d4c7c3ed7ef",
00:20:21.445  "zoned": false
00:20:21.445  }
00:20:21.445  ]
00:20:21.445   06:30:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.445   06:30:38	-- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:21.445   06:30:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:21.445   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.445   06:30:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:21.445   06:30:38	-- host/async_init.sh@75 -- # rm -f /tmp/tmp.W37dRt0ErW
00:20:21.445   06:30:38	-- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:20:21.445   06:30:38	-- host/async_init.sh@78 -- # nvmftestfini
00:20:21.445   06:30:38	-- nvmf/common.sh@476 -- # nvmfcleanup
00:20:21.445   06:30:38	-- nvmf/common.sh@116 -- # sync
00:20:21.445   06:30:38	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:21.445   06:30:38	-- nvmf/common.sh@119 -- # set +e
00:20:21.445   06:30:38	-- nvmf/common.sh@120 -- # for i in {1..20}
00:20:21.445   06:30:38	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:21.445  rmmod nvme_tcp
00:20:21.445  rmmod nvme_fabrics
00:20:21.445  rmmod nvme_keyring
00:20:21.445   06:30:38	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:21.445   06:30:38	-- nvmf/common.sh@123 -- # set -e
00:20:21.445   06:30:38	-- nvmf/common.sh@124 -- # return 0
00:20:21.445   06:30:38	-- nvmf/common.sh@477 -- # '[' -n 82448 ']'
00:20:21.445   06:30:38	-- nvmf/common.sh@478 -- # killprocess 82448
00:20:21.445   06:30:38	-- common/autotest_common.sh@936 -- # '[' -z 82448 ']'
00:20:21.445   06:30:38	-- common/autotest_common.sh@940 -- # kill -0 82448
00:20:21.445    06:30:38	-- common/autotest_common.sh@941 -- # uname
00:20:21.445   06:30:38	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:21.445    06:30:38	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82448
00:20:21.445   06:30:38	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:21.445   06:30:38	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:21.445  killing process with pid 82448
00:20:21.445   06:30:38	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 82448'
00:20:21.445   06:30:38	-- common/autotest_common.sh@955 -- # kill 82448
00:20:21.445   06:30:38	-- common/autotest_common.sh@960 -- # wait 82448
00:20:21.704   06:30:38	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:21.704   06:30:38	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:21.704   06:30:38	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:21.704   06:30:38	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:21.704   06:30:38	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:21.704   06:30:38	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:21.704   06:30:38	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:21.704    06:30:38	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:21.704   06:30:38	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:20:21.704  
00:20:21.704  real	0m2.612s
00:20:21.704  user	0m2.420s
00:20:21.704  sys	0m0.613s
00:20:21.704   06:30:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:21.704   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.704  ************************************
00:20:21.704  END TEST nvmf_async_init
00:20:21.704  ************************************
00:20:21.704   06:30:38	-- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp
00:20:21.704   06:30:38	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:21.704   06:30:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:21.704   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.704  ************************************
00:20:21.704  START TEST dma
00:20:21.704  ************************************
00:20:21.704   06:30:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp
00:20:21.964  * Looking for test storage...
00:20:21.964  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:21.964    06:30:38	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:21.964     06:30:38	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:21.964     06:30:38	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:21.964    06:30:38	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:21.964    06:30:38	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:21.964    06:30:38	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:21.964    06:30:38	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:21.964    06:30:38	-- scripts/common.sh@335 -- # IFS=.-:
00:20:21.964    06:30:38	-- scripts/common.sh@335 -- # read -ra ver1
00:20:21.964    06:30:38	-- scripts/common.sh@336 -- # IFS=.-:
00:20:21.964    06:30:38	-- scripts/common.sh@336 -- # read -ra ver2
00:20:21.964    06:30:38	-- scripts/common.sh@337 -- # local 'op=<'
00:20:21.964    06:30:38	-- scripts/common.sh@339 -- # ver1_l=2
00:20:21.964    06:30:38	-- scripts/common.sh@340 -- # ver2_l=1
00:20:21.964    06:30:38	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:21.964    06:30:38	-- scripts/common.sh@343 -- # case "$op" in
00:20:21.964    06:30:38	-- scripts/common.sh@344 -- # : 1
00:20:21.964    06:30:38	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:21.964    06:30:38	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:21.964     06:30:38	-- scripts/common.sh@364 -- # decimal 1
00:20:21.964     06:30:38	-- scripts/common.sh@352 -- # local d=1
00:20:21.964     06:30:38	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:21.964     06:30:38	-- scripts/common.sh@354 -- # echo 1
00:20:21.964    06:30:38	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:21.964     06:30:38	-- scripts/common.sh@365 -- # decimal 2
00:20:21.964     06:30:38	-- scripts/common.sh@352 -- # local d=2
00:20:21.964     06:30:38	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:21.964     06:30:38	-- scripts/common.sh@354 -- # echo 2
00:20:21.964    06:30:38	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:21.964    06:30:38	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:21.964    06:30:38	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:21.964    06:30:38	-- scripts/common.sh@367 -- # return 0
00:20:21.964    06:30:38	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:21.965    06:30:38	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:21.965  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:21.965  		--rc genhtml_branch_coverage=1
00:20:21.965  		--rc genhtml_function_coverage=1
00:20:21.965  		--rc genhtml_legend=1
00:20:21.965  		--rc geninfo_all_blocks=1
00:20:21.965  		--rc geninfo_unexecuted_blocks=1
00:20:21.965  		
00:20:21.965  		'
00:20:21.965    06:30:38	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:21.965  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:21.965  		--rc genhtml_branch_coverage=1
00:20:21.965  		--rc genhtml_function_coverage=1
00:20:21.965  		--rc genhtml_legend=1
00:20:21.965  		--rc geninfo_all_blocks=1
00:20:21.965  		--rc geninfo_unexecuted_blocks=1
00:20:21.965  		
00:20:21.965  		'
00:20:21.965    06:30:38	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:21.965  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:21.965  		--rc genhtml_branch_coverage=1
00:20:21.965  		--rc genhtml_function_coverage=1
00:20:21.965  		--rc genhtml_legend=1
00:20:21.965  		--rc geninfo_all_blocks=1
00:20:21.965  		--rc geninfo_unexecuted_blocks=1
00:20:21.965  		
00:20:21.965  		'
00:20:21.965    06:30:38	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:21.965  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:21.965  		--rc genhtml_branch_coverage=1
00:20:21.965  		--rc genhtml_function_coverage=1
00:20:21.965  		--rc genhtml_legend=1
00:20:21.965  		--rc geninfo_all_blocks=1
00:20:21.965  		--rc geninfo_unexecuted_blocks=1
00:20:21.965  		
00:20:21.965  		'
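The xtrace above shows scripts/common.sh comparing the detected lcov version (1.15) against 2 with cmp_versions/lt before exporting the coverage flags. A simplified, self-contained sketch of that field-by-field comparison (numeric fields only; illustrative, not the repo's exact helpers):

lt() {
    # Split both versions on '.', '-' and ':' (the IFS used by cmp_versions)
    # and compare field by field; missing fields count as 0.
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

# As in the log: lcov 1.15 < 2, so the older option names are exported via LCOV_OPTS.
lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"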
00:20:21.965   06:30:38	-- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:21.965     06:30:38	-- nvmf/common.sh@7 -- # uname -s
00:20:21.965    06:30:38	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:21.965    06:30:38	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:21.965    06:30:38	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:21.965    06:30:38	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:21.965    06:30:38	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:21.965    06:30:38	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:21.965    06:30:38	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:21.965    06:30:38	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:21.965    06:30:38	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:21.965     06:30:38	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:21.965    06:30:38	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:20:21.965    06:30:38	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:20:21.965    06:30:38	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:21.965    06:30:38	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:21.965    06:30:38	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:21.965    06:30:38	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:21.965     06:30:38	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:21.965     06:30:38	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:21.965     06:30:38	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:21.965      06:30:38	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:21.965      06:30:38	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:21.965      06:30:38	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:21.965      06:30:38	-- paths/export.sh@5 -- # export PATH
00:20:21.965      06:30:38	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:21.965    06:30:38	-- nvmf/common.sh@46 -- # : 0
00:20:21.965    06:30:38	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:21.965    06:30:38	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:21.965    06:30:38	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:21.965    06:30:38	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:21.965    06:30:38	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:21.965    06:30:38	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:21.965    06:30:38	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:21.965    06:30:38	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:21.965   06:30:38	-- host/dma.sh@12 -- # '[' tcp '!=' rdma ']'
00:20:21.965   06:30:38	-- host/dma.sh@13 -- # exit 0
00:20:21.965  ************************************
00:20:21.965  END TEST dma
00:20:21.965  ************************************
00:20:21.965  
00:20:21.965  real	0m0.215s
00:20:21.965  user	0m0.129s
00:20:21.965  sys	0m0.090s
00:20:21.965   06:30:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:21.965   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.965   06:30:38	-- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp
00:20:21.965   06:30:38	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:21.965   06:30:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:21.965   06:30:38	-- common/autotest_common.sh@10 -- # set +x
00:20:21.965  ************************************
00:20:21.965  START TEST nvmf_identify
00:20:21.965  ************************************
00:20:21.965   06:30:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp
00:20:22.225  * Looking for test storage...
00:20:22.225  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:22.225    06:30:38	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:22.225     06:30:38	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:22.225     06:30:38	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:22.225    06:30:39	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:22.225    06:30:39	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:22.225    06:30:39	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:22.225    06:30:39	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:22.225    06:30:39	-- scripts/common.sh@335 -- # IFS=.-:
00:20:22.225    06:30:39	-- scripts/common.sh@335 -- # read -ra ver1
00:20:22.225    06:30:39	-- scripts/common.sh@336 -- # IFS=.-:
00:20:22.225    06:30:39	-- scripts/common.sh@336 -- # read -ra ver2
00:20:22.225    06:30:39	-- scripts/common.sh@337 -- # local 'op=<'
00:20:22.225    06:30:39	-- scripts/common.sh@339 -- # ver1_l=2
00:20:22.225    06:30:39	-- scripts/common.sh@340 -- # ver2_l=1
00:20:22.225    06:30:39	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:22.225    06:30:39	-- scripts/common.sh@343 -- # case "$op" in
00:20:22.225    06:30:39	-- scripts/common.sh@344 -- # : 1
00:20:22.225    06:30:39	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:22.225    06:30:39	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:22.225     06:30:39	-- scripts/common.sh@364 -- # decimal 1
00:20:22.225     06:30:39	-- scripts/common.sh@352 -- # local d=1
00:20:22.225     06:30:39	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:22.225     06:30:39	-- scripts/common.sh@354 -- # echo 1
00:20:22.225    06:30:39	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:22.225     06:30:39	-- scripts/common.sh@365 -- # decimal 2
00:20:22.225     06:30:39	-- scripts/common.sh@352 -- # local d=2
00:20:22.225     06:30:39	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:22.225     06:30:39	-- scripts/common.sh@354 -- # echo 2
00:20:22.225    06:30:39	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:22.225    06:30:39	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:22.225    06:30:39	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:22.225    06:30:39	-- scripts/common.sh@367 -- # return 0
00:20:22.225    06:30:39	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:22.225    06:30:39	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:22.225  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:22.225  		--rc genhtml_branch_coverage=1
00:20:22.225  		--rc genhtml_function_coverage=1
00:20:22.225  		--rc genhtml_legend=1
00:20:22.225  		--rc geninfo_all_blocks=1
00:20:22.225  		--rc geninfo_unexecuted_blocks=1
00:20:22.225  		
00:20:22.225  		'
00:20:22.225    06:30:39	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:22.225  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:22.225  		--rc genhtml_branch_coverage=1
00:20:22.225  		--rc genhtml_function_coverage=1
00:20:22.225  		--rc genhtml_legend=1
00:20:22.225  		--rc geninfo_all_blocks=1
00:20:22.225  		--rc geninfo_unexecuted_blocks=1
00:20:22.225  		
00:20:22.225  		'
00:20:22.225    06:30:39	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:22.225  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:22.225  		--rc genhtml_branch_coverage=1
00:20:22.225  		--rc genhtml_function_coverage=1
00:20:22.225  		--rc genhtml_legend=1
00:20:22.225  		--rc geninfo_all_blocks=1
00:20:22.225  		--rc geninfo_unexecuted_blocks=1
00:20:22.225  		
00:20:22.225  		'
00:20:22.225    06:30:39	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:22.225  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:22.225  		--rc genhtml_branch_coverage=1
00:20:22.225  		--rc genhtml_function_coverage=1
00:20:22.225  		--rc genhtml_legend=1
00:20:22.225  		--rc geninfo_all_blocks=1
00:20:22.225  		--rc geninfo_unexecuted_blocks=1
00:20:22.225  		
00:20:22.225  		'
00:20:22.225   06:30:39	-- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:22.225     06:30:39	-- nvmf/common.sh@7 -- # uname -s
00:20:22.225    06:30:39	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:22.225    06:30:39	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:22.226    06:30:39	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:22.226    06:30:39	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:22.226    06:30:39	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:22.226    06:30:39	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:22.226    06:30:39	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:22.226    06:30:39	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:22.226    06:30:39	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:22.226     06:30:39	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:22.226    06:30:39	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:20:22.226    06:30:39	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:20:22.226    06:30:39	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:22.226    06:30:39	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:22.226    06:30:39	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:22.226    06:30:39	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:22.226     06:30:39	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:22.226     06:30:39	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:22.226     06:30:39	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:22.226      06:30:39	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:22.226      06:30:39	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:22.226      06:30:39	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:22.226      06:30:39	-- paths/export.sh@5 -- # export PATH
00:20:22.226      06:30:39	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:22.226    06:30:39	-- nvmf/common.sh@46 -- # : 0
00:20:22.226    06:30:39	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:22.226    06:30:39	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:22.226    06:30:39	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:22.226    06:30:39	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:22.226    06:30:39	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:22.226    06:30:39	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:22.226    06:30:39	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:22.226    06:30:39	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:22.226   06:30:39	-- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:22.226   06:30:39	-- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:22.226   06:30:39	-- host/identify.sh@14 -- # nvmftestinit
00:20:22.226   06:30:39	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:22.226   06:30:39	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:22.226   06:30:39	-- nvmf/common.sh@436 -- # prepare_net_devs
00:20:22.226   06:30:39	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:22.226   06:30:39	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:22.226   06:30:39	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:22.226   06:30:39	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:22.226    06:30:39	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:22.226   06:30:39	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:20:22.226   06:30:39	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:20:22.226   06:30:39	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:20:22.226   06:30:39	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:20:22.226   06:30:39	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:20:22.226   06:30:39	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:20:22.226   06:30:39	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:22.226   06:30:39	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:22.226   06:30:39	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:20:22.226   06:30:39	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:20:22.226   06:30:39	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:20:22.226   06:30:39	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:20:22.226   06:30:39	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:20:22.226   06:30:39	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:22.226   06:30:39	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:20:22.226   06:30:39	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:20:22.226   06:30:39	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:20:22.226   06:30:39	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:20:22.226   06:30:39	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:20:22.226   06:30:39	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:20:22.226  Cannot find device "nvmf_tgt_br"
00:20:22.226   06:30:39	-- nvmf/common.sh@154 -- # true
00:20:22.226   06:30:39	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:20:22.226  Cannot find device "nvmf_tgt_br2"
00:20:22.226   06:30:39	-- nvmf/common.sh@155 -- # true
00:20:22.226   06:30:39	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:20:22.226   06:30:39	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:20:22.226  Cannot find device "nvmf_tgt_br"
00:20:22.226   06:30:39	-- nvmf/common.sh@157 -- # true
00:20:22.226   06:30:39	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:20:22.226  Cannot find device "nvmf_tgt_br2"
00:20:22.226   06:30:39	-- nvmf/common.sh@158 -- # true
00:20:22.226   06:30:39	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:20:22.485   06:30:39	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:20:22.485   06:30:39	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:20:22.485  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:22.485   06:30:39	-- nvmf/common.sh@161 -- # true
00:20:22.485   06:30:39	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:20:22.485  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:22.485   06:30:39	-- nvmf/common.sh@162 -- # true
00:20:22.485   06:30:39	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:20:22.485   06:30:39	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:20:22.485   06:30:39	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:20:22.485   06:30:39	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:20:22.485   06:30:39	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:20:22.485   06:30:39	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:20:22.485   06:30:39	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:20:22.485   06:30:39	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:20:22.485   06:30:39	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:20:22.485   06:30:39	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:20:22.485   06:30:39	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:20:22.485   06:30:39	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:20:22.485   06:30:39	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:20:22.485   06:30:39	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:20:22.486   06:30:39	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:20:22.486   06:30:39	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:20:22.486   06:30:39	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:20:22.486   06:30:39	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:20:22.486   06:30:39	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:20:22.486   06:30:39	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:20:22.486   06:30:39	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:20:22.486   06:30:39	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:20:22.486   06:30:39	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:20:22.486   06:30:39	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:20:22.486  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:22.486  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms
00:20:22.486  
00:20:22.486  --- 10.0.0.2 ping statistics ---
00:20:22.486  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:22.486  rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:20:22.486   06:30:39	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:20:22.486  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:22.486  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms
00:20:22.486  
00:20:22.486  --- 10.0.0.3 ping statistics ---
00:20:22.486  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:22.486  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:20:22.486   06:30:39	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:22.486  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:22.486  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
00:20:22.486  
00:20:22.486  --- 10.0.0.1 ping statistics ---
00:20:22.486  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:22.486  rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
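The commands above are nvmf_veth_init building the test network: a namespace for the target, veth pairs whose host ends are enslaved to the nvmf_br bridge, 10.0.0.x addressing, an iptables accept rule for port 4420, and ping checks in both directions. Condensed to the first target interface only, the equivalent manual setup looks roughly like this (run as root; interface and address names are taken from the log):

ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address

ip link add nvmf_br type bridge                                # bridge the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic cross the bridge

ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator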
00:20:22.486   06:30:39	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:22.486   06:30:39	-- nvmf/common.sh@421 -- # return 0
00:20:22.486   06:30:39	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:20:22.486   06:30:39	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:22.486   06:30:39	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:20:22.486   06:30:39	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:20:22.486   06:30:39	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:22.486   06:30:39	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:20:22.486   06:30:39	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:20:22.486   06:30:39	-- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:20:22.486   06:30:39	-- common/autotest_common.sh@722 -- # xtrace_disable
00:20:22.486   06:30:39	-- common/autotest_common.sh@10 -- # set +x
00:20:22.745   06:30:39	-- host/identify.sh@19 -- # nvmfpid=82727
00:20:22.745   06:30:39	-- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:20:22.745   06:30:39	-- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:22.745   06:30:39	-- host/identify.sh@23 -- # waitforlisten 82727
00:20:22.745  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:22.745   06:30:39	-- common/autotest_common.sh@829 -- # '[' -z 82727 ']'
00:20:22.745   06:30:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:22.745   06:30:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:22.745   06:30:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:22.745   06:30:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:22.745   06:30:39	-- common/autotest_common.sh@10 -- # set +x
00:20:22.745  [2024-12-16 06:30:39.519655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:22.745  [2024-12-16 06:30:39.519910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:22.745  [2024-12-16 06:30:39.659443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:23.004  [2024-12-16 06:30:39.741178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:23.004  [2024-12-16 06:30:39.741560] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:23.004  [2024-12-16 06:30:39.741685] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:23.004  [2024-12-16 06:30:39.741802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:23.004  [2024-12-16 06:30:39.742249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:23.004  [2024-12-16 06:30:39.742455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:23.004  [2024-12-16 06:30:39.742558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:23.004  [2024-12-16 06:30:39.742567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:23.944   06:30:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:23.944   06:30:40	-- common/autotest_common.sh@862 -- # return 0
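waitforlisten above blocks until the target launched with "ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF" is up and its RPC socket at /var/tmp/spdk.sock is available; only then do the rpc_cmd calls below run. A simplified poll loop with the same shape (the 0.5 s interval is an assumption, and the real helper additionally confirms the socket answers RPCs):

# Start the target inside the namespace, as host/identify.sh@18 does above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified waitforlisten: poll up to 100 times (max_retries in the log)
# for the UNIX domain socket, bailing out if the target died meanwhile.
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.5            # interval is an assumption, not taken from the log
done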
00:20:23.944   06:30:40	-- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:23.944   06:30:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.944   06:30:40	-- common/autotest_common.sh@10 -- # set +x
00:20:23.944  [2024-12-16 06:30:40.562171] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:23.944   06:30:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:23.944   06:30:40	-- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:20:23.944   06:30:40	-- common/autotest_common.sh@728 -- # xtrace_disable
00:20:23.944   06:30:40	-- common/autotest_common.sh@10 -- # set +x
00:20:23.944   06:30:40	-- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:23.944   06:30:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.944   06:30:40	-- common/autotest_common.sh@10 -- # set +x
00:20:23.944  Malloc0
00:20:23.944   06:30:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:23.944   06:30:40	-- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:23.944   06:30:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.944   06:30:40	-- common/autotest_common.sh@10 -- # set +x
00:20:23.944   06:30:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:23.944   06:30:40	-- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:20:23.944   06:30:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.944   06:30:40	-- common/autotest_common.sh@10 -- # set +x
00:20:23.944   06:30:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:23.944   06:30:40	-- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:23.944   06:30:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.944   06:30:40	-- common/autotest_common.sh@10 -- # set +x
00:20:23.944  [2024-12-16 06:30:40.675762] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:23.944   06:30:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:23.944   06:30:40	-- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:23.944   06:30:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.944   06:30:40	-- common/autotest_common.sh@10 -- # set +x
00:20:23.944   06:30:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:23.944   06:30:40	-- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:20:23.944   06:30:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:23.944   06:30:40	-- common/autotest_common.sh@10 -- # set +x
00:20:23.944  [2024-12-16 06:30:40.691524] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:20:23.944  [
00:20:23.944  {
00:20:23.944  "allow_any_host": true,
00:20:23.944  "hosts": [],
00:20:23.944  "listen_addresses": [
00:20:23.944  {
00:20:23.944  "adrfam": "IPv4",
00:20:23.944  "traddr": "10.0.0.2",
00:20:23.944  "transport": "TCP",
00:20:23.944  "trsvcid": "4420",
00:20:23.944  "trtype": "TCP"
00:20:23.944  }
00:20:23.944  ],
00:20:23.944  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:23.944  "subtype": "Discovery"
00:20:23.944  },
00:20:23.944  {
00:20:23.944  "allow_any_host": true,
00:20:23.944  "hosts": [],
00:20:23.944  "listen_addresses": [
00:20:23.944  {
00:20:23.944  "adrfam": "IPv4",
00:20:23.944  "traddr": "10.0.0.2",
00:20:23.944  "transport": "TCP",
00:20:23.944  "trsvcid": "4420",
00:20:23.944  "trtype": "TCP"
00:20:23.944  }
00:20:23.944  ],
00:20:23.944  "max_cntlid": 65519,
00:20:23.944  "max_namespaces": 32,
00:20:23.944  "min_cntlid": 1,
00:20:23.944  "model_number": "SPDK bdev Controller",
00:20:23.944  "namespaces": [
00:20:23.944  {
00:20:23.944  "bdev_name": "Malloc0",
00:20:23.944  "eui64": "ABCDEF0123456789",
00:20:23.944  "name": "Malloc0",
00:20:23.944  "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:20:23.944  "nsid": 1,
00:20:23.944  "uuid": "826d191e-d2fd-4ff4-86e5-7226fa98ceed"
00:20:23.944  }
00:20:23.944  ],
00:20:23.944  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:23.944  "serial_number": "SPDK00000000000001",
00:20:23.944  "subtype": "NVMe"
00:20:23.944  }
00:20:23.944  ]
00:20:23.944   06:30:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
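rpc_cmd in the harness is a wrapper around scripts/rpc.py, so the setup sequence above (transport, Malloc0 bdev, subsystem, namespace, data and discovery listeners, then nvmf_get_subsystems) can be replayed directly against the running target with the same parameters; a sketch:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems                                     # prints the JSON shown above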
00:20:23.944   06:30:40	-- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:20:23.944  [2024-12-16 06:30:40.731458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:23.944  [2024-12-16 06:30:40.731675] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82786 ]
00:20:23.944  [2024-12-16 06:30:40.863088] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:20:23.944  [2024-12-16 06:30:40.863158] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:23.944  [2024-12-16 06:30:40.863164] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:23.944  [2024-12-16 06:30:40.863173] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:23.944  [2024-12-16 06:30:40.863181] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:23.944  [2024-12-16 06:30:40.863336] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:20:23.944  [2024-12-16 06:30:40.863389] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c56d30 0
00:20:23.944  [2024-12-16 06:30:40.867564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:20:23.944  [2024-12-16 06:30:40.867603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:20:23.944  [2024-12-16 06:30:40.867608] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:20:23.944  [2024-12-16 06:30:40.867612] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:20:23.944  [2024-12-16 06:30:40.867657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.944  [2024-12-16 06:30:40.867665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.944  [2024-12-16 06:30:40.867668] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.944  [2024-12-16 06:30:40.867696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:20:23.944  [2024-12-16 06:30:40.867742] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.944  [2024-12-16 06:30:40.875520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.944  [2024-12-16 06:30:40.875540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.944  [2024-12-16 06:30:40.875560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.944  [2024-12-16 06:30:40.875564] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.944  [2024-12-16 06:30:40.875574] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:23.944  [2024-12-16 06:30:40.875581] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout)
00:20:23.944  [2024-12-16 06:30:40.875587] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout)
00:20:23.944  [2024-12-16 06:30:40.875602] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.944  [2024-12-16 06:30:40.875607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.944  [2024-12-16 06:30:40.875610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.944  [2024-12-16 06:30:40.875619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.944  [2024-12-16 06:30:40.875646] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.944  [2024-12-16 06:30:40.875708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.944  [2024-12-16 06:30:40.875714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.944  [2024-12-16 06:30:40.875717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.944  [2024-12-16 06:30:40.875721] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.944  [2024-12-16 06:30:40.875726] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout)
00:20:23.944  [2024-12-16 06:30:40.875733] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout)
00:20:23.944  [2024-12-16 06:30:40.875740] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.944  [2024-12-16 06:30:40.875743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.944  [2024-12-16 06:30:40.875746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.944  [2024-12-16 06:30:40.875753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.944  [2024-12-16 06:30:40.875803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.944  [2024-12-16 06:30:40.875860] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.944  [2024-12-16 06:30:40.875866] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.945  [2024-12-16 06:30:40.875869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.875873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.945  [2024-12-16 06:30:40.875879] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout)
00:20:23.945  [2024-12-16 06:30:40.875887] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms)
00:20:23.945  [2024-12-16 06:30:40.875893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.875897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.875900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.945  [2024-12-16 06:30:40.875907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.945  [2024-12-16 06:30:40.875925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.945  [2024-12-16 06:30:40.875983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.945  [2024-12-16 06:30:40.875989] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.945  [2024-12-16 06:30:40.875992] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.875996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.945  [2024-12-16 06:30:40.876002] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:23.945  [2024-12-16 06:30:40.876011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.945  [2024-12-16 06:30:40.876025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.945  [2024-12-16 06:30:40.876043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.945  [2024-12-16 06:30:40.876107] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.945  [2024-12-16 06:30:40.876113] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.945  [2024-12-16 06:30:40.876116] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876119] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.945  [2024-12-16 06:30:40.876125] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0
00:20:23.945  [2024-12-16 06:30:40.876130] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms)
00:20:23.945  [2024-12-16 06:30:40.876137] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:23.945  [2024-12-16 06:30:40.876242] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1
00:20:23.945  [2024-12-16 06:30:40.876248] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:23.945  [2024-12-16 06:30:40.876256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876260] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876263] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.945  [2024-12-16 06:30:40.876270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.945  [2024-12-16 06:30:40.876288] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.945  [2024-12-16 06:30:40.876350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.945  [2024-12-16 06:30:40.876356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.945  [2024-12-16 06:30:40.876359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.945  [2024-12-16 06:30:40.876368] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:23.945  [2024-12-16 06:30:40.876377] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876381] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.945  [2024-12-16 06:30:40.876391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.945  [2024-12-16 06:30:40.876408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.945  [2024-12-16 06:30:40.876464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.945  [2024-12-16 06:30:40.876469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.945  [2024-12-16 06:30:40.876473] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876476] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.945  [2024-12-16 06:30:40.876482] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:23.945  [2024-12-16 06:30:40.876487] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:20:23.945  [2024-12-16 06:30:40.876508] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:20:23.945  [2024-12-16 06:30:40.876521] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:20:23.945  [2024-12-16 06:30:40.876542] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876547] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.945  [2024-12-16 06:30:40.876558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.945  [2024-12-16 06:30:40.876578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.945  [2024-12-16 06:30:40.876685] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:23.945  [2024-12-16 06:30:40.876692] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:23.945  [2024-12-16 06:30:40.876695] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876699] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c56d30): datao=0, datal=4096, cccid=0
00:20:23.945  [2024-12-16 06:30:40.876704] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb4f30) on tqpair(0x1c56d30): expected_datao=0, payload_size=4096
00:20:23.945  [2024-12-16 06:30:40.876712] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876716] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.945  [2024-12-16 06:30:40.876730] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.945  [2024-12-16 06:30:40.876733] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876737] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.945  [2024-12-16 06:30:40.876746] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:20:23.945  [2024-12-16 06:30:40.876751] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:20:23.945  [2024-12-16 06:30:40.876755] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:20:23.945  [2024-12-16 06:30:40.876760] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:20:23.945  [2024-12-16 06:30:40.876764] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:20:23.945  [2024-12-16 06:30:40.876769] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:20:23.945  [2024-12-16 06:30:40.876781] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:20:23.945  [2024-12-16 06:30:40.876789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876793] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876796] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.945  [2024-12-16 06:30:40.876804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:23.945  [2024-12-16 06:30:40.876835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.945  [2024-12-16 06:30:40.876925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.945  [2024-12-16 06:30:40.876931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.945  [2024-12-16 06:30:40.876935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb4f30) on tqpair=0x1c56d30
00:20:23.945  [2024-12-16 06:30:40.876947] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876951] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876954] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c56d30)
00:20:23.945  [2024-12-16 06:30:40.876960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:23.945  [2024-12-16 06:30:40.876966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.945  [2024-12-16 06:30:40.876970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.876973] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c56d30)
00:20:23.946  [2024-12-16 06:30:40.876978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:23.946  [2024-12-16 06:30:40.876984] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.876987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.876991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c56d30)
00:20:23.946  [2024-12-16 06:30:40.876996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:23.946  [2024-12-16 06:30:40.877001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877008] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:23.946  [2024-12-16 06:30:40.877013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:23.946  [2024-12-16 06:30:40.877018] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:20:23.946  [2024-12-16 06:30:40.877030] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:23.946  [2024-12-16 06:30:40.877036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c56d30)
00:20:23.946  [2024-12-16 06:30:40.877050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.946  [2024-12-16 06:30:40.877070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb4f30, cid 0, qid 0
00:20:23.946  [2024-12-16 06:30:40.877076] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5090, cid 1, qid 0
00:20:23.946  [2024-12-16 06:30:40.877081] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb51f0, cid 2, qid 0
00:20:23.946  [2024-12-16 06:30:40.877085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:23.946  [2024-12-16 06:30:40.877089] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb54b0, cid 4, qid 0
00:20:23.946  [2024-12-16 06:30:40.877202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.946  [2024-12-16 06:30:40.877208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.946  [2024-12-16 06:30:40.877211] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877215] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb54b0) on tqpair=0x1c56d30
00:20:23.946  [2024-12-16 06:30:40.877221] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:20:23.946  [2024-12-16 06:30:40.877226] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:20:23.946  [2024-12-16 06:30:40.877236] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c56d30)
00:20:23.946  [2024-12-16 06:30:40.877251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.946  [2024-12-16 06:30:40.877269] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb54b0, cid 4, qid 0
00:20:23.946  [2024-12-16 06:30:40.877345] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:23.946  [2024-12-16 06:30:40.877351] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:23.946  [2024-12-16 06:30:40.877355] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877358] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c56d30): datao=0, datal=4096, cccid=4
00:20:23.946  [2024-12-16 06:30:40.877362] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb54b0) on tqpair(0x1c56d30): expected_datao=0, payload_size=4096
00:20:23.946  [2024-12-16 06:30:40.877369] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877373] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877381] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.946  [2024-12-16 06:30:40.877386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.946  [2024-12-16 06:30:40.877390] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877393] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb54b0) on tqpair=0x1c56d30
00:20:23.946  [2024-12-16 06:30:40.877406] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:20:23.946  [2024-12-16 06:30:40.877431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877440] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c56d30)
00:20:23.946  [2024-12-16 06:30:40.877447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.946  [2024-12-16 06:30:40.877454] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c56d30)
00:20:23.946  [2024-12-16 06:30:40.877466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:23.946  [2024-12-16 06:30:40.877501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb54b0, cid 4, qid 0
00:20:23.946  [2024-12-16 06:30:40.877512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5610, cid 5, qid 0
00:20:23.946  [2024-12-16 06:30:40.877670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:23.946  [2024-12-16 06:30:40.877685] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:23.946  [2024-12-16 06:30:40.877690] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877693] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c56d30): datao=0, datal=1024, cccid=4
00:20:23.946  [2024-12-16 06:30:40.877698] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb54b0) on tqpair(0x1c56d30): expected_datao=0, payload_size=1024
00:20:23.946  [2024-12-16 06:30:40.877705] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877708] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.946  [2024-12-16 06:30:40.877719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.946  [2024-12-16 06:30:40.877723] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.946  [2024-12-16 06:30:40.877726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5610) on tqpair=0x1c56d30
00:20:24.209  [2024-12-16 06:30:40.921516] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.209  [2024-12-16 06:30:40.921536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.209  [2024-12-16 06:30:40.921556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb54b0) on tqpair=0x1c56d30
00:20:24.209  [2024-12-16 06:30:40.921581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921590] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c56d30)
00:20:24.209  [2024-12-16 06:30:40.921598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.209  [2024-12-16 06:30:40.921636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb54b0, cid 4, qid 0
00:20:24.209  [2024-12-16 06:30:40.921712] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.209  [2024-12-16 06:30:40.921718] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.209  [2024-12-16 06:30:40.921721] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921724] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c56d30): datao=0, datal=3072, cccid=4
00:20:24.209  [2024-12-16 06:30:40.921729] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb54b0) on tqpair(0x1c56d30): expected_datao=0, payload_size=3072
00:20:24.209  [2024-12-16 06:30:40.921736] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921740] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921747] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.209  [2024-12-16 06:30:40.921753] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.209  [2024-12-16 06:30:40.921756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb54b0) on tqpair=0x1c56d30
00:20:24.209  [2024-12-16 06:30:40.921770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921774] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921777] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c56d30)
00:20:24.209  [2024-12-16 06:30:40.921799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.209  [2024-12-16 06:30:40.921840] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb54b0, cid 4, qid 0
00:20:24.209  [2024-12-16 06:30:40.921923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.209  [2024-12-16 06:30:40.921930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.209  [2024-12-16 06:30:40.921933] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921936] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c56d30): datao=0, datal=8, cccid=4
00:20:24.209  [2024-12-16 06:30:40.921941] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb54b0) on tqpair(0x1c56d30): expected_datao=0, payload_size=8
00:20:24.209  [2024-12-16 06:30:40.921947] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.209  [2024-12-16 06:30:40.921951] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.209  =====================================================
00:20:24.209  NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:24.209  =====================================================
00:20:24.209  Controller Capabilities/Features
00:20:24.209  ================================
00:20:24.209  Vendor ID:                             0000
00:20:24.209  Subsystem Vendor ID:                   0000
00:20:24.209  Serial Number:                         ....................
00:20:24.209  Model Number:                          ........................................
00:20:24.209  Firmware Version:                      24.01.1
00:20:24.209  Recommended Arb Burst:                 0
00:20:24.209  IEEE OUI Identifier:                   00 00 00
00:20:24.209  Multi-path I/O
00:20:24.209    May have multiple subsystem ports:   No
00:20:24.209    May have multiple controllers:       No
00:20:24.209    Associated with SR-IOV VF:           No
00:20:24.209  Max Data Transfer Size:                131072
00:20:24.209  Max Number of Namespaces:              0
00:20:24.209  Max Number of I/O Queues:              1024
00:20:24.209  NVMe Specification Version (VS):       1.3
00:20:24.209  NVMe Specification Version (Identify): 1.3
00:20:24.209  Maximum Queue Entries:                 128
00:20:24.209  Contiguous Queues Required:            Yes
00:20:24.209  Arbitration Mechanisms Supported
00:20:24.209    Weighted Round Robin:                Not Supported
00:20:24.209    Vendor Specific:                     Not Supported
00:20:24.209  Reset Timeout:                         15000 ms
00:20:24.209  Doorbell Stride:                       4 bytes
00:20:24.209  NVM Subsystem Reset:                   Not Supported
00:20:24.209  Command Sets Supported
00:20:24.209    NVM Command Set:                     Supported
00:20:24.209  Boot Partition:                        Not Supported
00:20:24.209  Memory Page Size Minimum:              4096 bytes
00:20:24.209  Memory Page Size Maximum:              4096 bytes
00:20:24.209  Persistent Memory Region:              Not Supported
00:20:24.209  Optional Asynchronous Events Supported
00:20:24.209    Namespace Attribute Notices:         Not Supported
00:20:24.209    Firmware Activation Notices:         Not Supported
00:20:24.209    ANA Change Notices:                  Not Supported
00:20:24.209    PLE Aggregate Log Change Notices:    Not Supported
00:20:24.209    LBA Status Info Alert Notices:       Not Supported
00:20:24.209    EGE Aggregate Log Change Notices:    Not Supported
00:20:24.210    Normal NVM Subsystem Shutdown event: Not Supported
00:20:24.210    Zone Descriptor Change Notices:      Not Supported
00:20:24.210    Discovery Log Change Notices:        Supported
00:20:24.210  Controller Attributes
00:20:24.210    128-bit Host Identifier:             Not Supported
00:20:24.210    Non-Operational Permissive Mode:     Not Supported
00:20:24.210    NVM Sets:                            Not Supported
00:20:24.210    Read Recovery Levels:                Not Supported
00:20:24.210    Endurance Groups:                    Not Supported
00:20:24.210    Predictable Latency Mode:            Not Supported
00:20:24.210    Traffic Based Keep Alive:            Not Supported
00:20:24.210    Namespace Granularity:               Not Supported
00:20:24.210    SQ Associations:                     Not Supported
00:20:24.210    UUID List:                           Not Supported
00:20:24.210    Multi-Domain Subsystem:              Not Supported
00:20:24.210    Fixed Capacity Management:           Not Supported
00:20:24.210    Variable Capacity Management:        Not Supported
00:20:24.210    Delete Endurance Group:              Not Supported
00:20:24.210    Delete NVM Set:                      Not Supported
00:20:24.210    Extended LBA Formats Supported:      Not Supported
00:20:24.210    Flexible Data Placement Supported:   Not Supported
00:20:24.210  
00:20:24.210  Controller Memory Buffer Support
00:20:24.210  ================================
00:20:24.210  Supported:                             No
00:20:24.210  
00:20:24.210  Persistent Memory Region Support
00:20:24.210  ================================
00:20:24.210  Supported:                             No
00:20:24.210  
00:20:24.210  Admin Command Set Attributes
00:20:24.210  ============================
00:20:24.210  Security Send/Receive:                 Not Supported
00:20:24.210  Format NVM:                            Not Supported
00:20:24.210  Firmware Activate/Download:            Not Supported
00:20:24.210  Namespace Management:                  Not Supported
00:20:24.210  Device Self-Test:                      Not Supported
00:20:24.210  Directives:                            Not Supported
00:20:24.210  NVMe-MI:                               Not Supported
00:20:24.210  Virtualization Management:             Not Supported
00:20:24.210  Doorbell Buffer Config:                Not Supported
00:20:24.210  Get LBA Status Capability:             Not Supported
00:20:24.210  Command & Feature Lockdown Capability: Not Supported
00:20:24.210  Abort Command Limit:                   1
00:20:24.210  Async Event Request Limit:             4
00:20:24.210  Number of Firmware Slots:              N/A
00:20:24.210  Firmware Slot 1 Read-Only:             N/A
00:20:24.210  [2024-12-16 06:30:40.963540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.210  [2024-12-16 06:30:40.963560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.210  [2024-12-16 06:30:40.963580] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.210  [2024-12-16 06:30:40.963584] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb54b0) on tqpair=0x1c56d30
00:20:24.210  Firmware Activation Without Reset:     N/A
00:20:24.210  Multiple Update Detection Support:     N/A
00:20:24.210  Firmware Update Granularity:           No Information Provided
00:20:24.210  Per-Namespace SMART Log:               No
00:20:24.210  Asymmetric Namespace Access Log Page:  Not Supported
00:20:24.210  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:20:24.210  Command Effects Log Page:              Not Supported
00:20:24.210  Get Log Page Extended Data:            Supported
00:20:24.210  Telemetry Log Pages:                   Not Supported
00:20:24.210  Persistent Event Log Pages:            Not Supported
00:20:24.210  Supported Log Pages Log Page:          May Support
00:20:24.210  Commands Supported & Effects Log Page: Not Supported
00:20:24.210  Feature Identifiers & Effects Log Page: May Support
00:20:24.210  NVMe-MI Commands & Effects Log Page:   May Support
00:20:24.210  Data Area 4 for Telemetry Log:         Not Supported
00:20:24.210  Error Log Page Entries Supported:      128
00:20:24.210  Keep Alive:                            Not Supported
00:20:24.210  
00:20:24.210  NVM Command Set Attributes
00:20:24.210  ==========================
00:20:24.210  Submission Queue Entry Size
00:20:24.210    Max:                       1
00:20:24.210    Min:                       1
00:20:24.210  Completion Queue Entry Size
00:20:24.210    Max:                       1
00:20:24.210    Min:                       1
00:20:24.210  Number of Namespaces:        0
00:20:24.210  Compare Command:             Not Supported
00:20:24.210  Write Uncorrectable Command: Not Supported
00:20:24.210  Dataset Management Command:  Not Supported
00:20:24.210  Write Zeroes Command:        Not Supported
00:20:24.210  Set Features Save Field:     Not Supported
00:20:24.210  Reservations:                Not Supported
00:20:24.210  Timestamp:                   Not Supported
00:20:24.210  Copy:                        Not Supported
00:20:24.210  Volatile Write Cache:        Not Present
00:20:24.210  Atomic Write Unit (Normal):  1
00:20:24.210  Atomic Write Unit (PFail):   1
00:20:24.210  Atomic Compare & Write Unit: 1
00:20:24.210  Fused Compare & Write:       Supported
00:20:24.210  Scatter-Gather List
00:20:24.210    SGL Command Set:           Supported
00:20:24.210    SGL Keyed:                 Supported
00:20:24.210    SGL Bit Bucket Descriptor: Not Supported
00:20:24.210    SGL Metadata Pointer:      Not Supported
00:20:24.210    Oversized SGL:             Not Supported
00:20:24.210    SGL Metadata Address:      Not Supported
00:20:24.210    SGL Offset:                Supported
00:20:24.210    Transport SGL Data Block:  Not Supported
00:20:24.210  Replay Protected Memory Block:  Not Supported
00:20:24.210  
00:20:24.210  Firmware Slot Information
00:20:24.210  =========================
00:20:24.210  Active slot:                 0
00:20:24.210  
00:20:24.210  
00:20:24.210  Error Log
00:20:24.210  =========
00:20:24.210  
00:20:24.210  Active Namespaces
00:20:24.210  =================
00:20:24.210  Discovery Log Page
00:20:24.210  ==================
00:20:24.210  Generation Counter:                    2
00:20:24.210  Number of Records:                     2
00:20:24.210  Record Format:                         0
00:20:24.210  
00:20:24.210  Discovery Log Entry 0
00:20:24.210  ----------------------
00:20:24.210  Transport Type:                        3 (TCP)
00:20:24.210  Address Family:                        1 (IPv4)
00:20:24.210  Subsystem Type:                        3 (Current Discovery Subsystem)
00:20:24.210  Entry Flags:
00:20:24.210    Duplicate Returned Information:			1
00:20:24.210    Explicit Persistent Connection Support for Discovery: 1
00:20:24.210  Transport Requirements:
00:20:24.210    Secure Channel:                      Not Required
00:20:24.210  Port ID:                               0 (0x0000)
00:20:24.210  Controller ID:                         65535 (0xffff)
00:20:24.210  Admin Max SQ Size:                     128
00:20:24.210  Transport Service Identifier:          4420                            
00:20:24.210  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:20:24.210  Transport Address:                     10.0.0.2                                                                                                                                                                                                                                                        
00:20:24.210  Discovery Log Entry 1
00:20:24.210  ----------------------
00:20:24.210  Transport Type:                        3 (TCP)
00:20:24.210  Address Family:                        1 (IPv4)
00:20:24.210  Subsystem Type:                        2 (NVM Subsystem)
00:20:24.210  Entry Flags:
00:20:24.210    Duplicate Returned Information:			0
00:20:24.210    Explicit Persistent Connection Support for Discovery: 0
00:20:24.210  Transport Requirements:
00:20:24.210    Secure Channel:                      Not Required
00:20:24.210  Port ID:                               0 (0x0000)
00:20:24.210  Controller ID:                         65535 (0xffff)
00:20:24.210  Admin Max SQ Size:                     128
00:20:24.210  Transport Service Identifier:          4420                            
00:20:24.211  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:cnode1
00:20:24.211  Transport Address:                     10.0.0.2
00:20:24.211  [2024-12-16 06:30:40.963675] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:24.211  [2024-12-16 06:30:40.963691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.211  [2024-12-16 06:30:40.963697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.211  [2024-12-16 06:30:40.963703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.211  [2024-12-16 06:30:40.963708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.211  [2024-12-16 06:30:40.963717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.963721] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.963724] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.963731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.963755] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.963833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.963855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.963858] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.963868] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.963876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.963880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.963884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.963891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.963912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.963995] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.964004] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964008] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.964013] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:20:24.211  [2024-12-16 06:30:40.964018] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:20:24.211  [2024-12-16 06:30:40.964027] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964031] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.964041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.964058] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.964118] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964124] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.964127] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964131] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.964142] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964146] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.964156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.964173] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.964236] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.964245] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.964258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964266] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.964272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.964290] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.964351] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964357] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.964360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.964373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964381] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964385] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.964391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.964408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.964480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.964490] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.964509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964513] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964516] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.964523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.964540] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.964606] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964613] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.964617] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964620] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.964630] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964634] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964637] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.964644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.964663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.964721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.964730] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964734] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.964743] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.964757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.964774] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.964838] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964843] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.211  [2024-12-16 06:30:40.964847] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.211  [2024-12-16 06:30:40.964859] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.211  [2024-12-16 06:30:40.964867] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.211  [2024-12-16 06:30:40.964873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.211  [2024-12-16 06:30:40.964889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.211  [2024-12-16 06:30:40.964963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.211  [2024-12-16 06:30:40.964969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.964972] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.964975] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.964985] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.964989] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.964992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.964999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.965027] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.965091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.965097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.965100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965103] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.965113] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965117] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965120] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.965126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.965143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.965202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.965208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.965211] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965215] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.965224] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965228] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.965238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.965254] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.965334] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.965340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.965343] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965347] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.965357] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965364] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.965371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.965387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.965447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.965453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.965456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.965470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.965484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.965509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.965589] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.965595] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.965598] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.965612] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965619] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.965626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.965644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.965716] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.965722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.965725] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965728] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.965739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.965753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.965771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.965827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.965833] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.965836] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965840] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.965849] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965854] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965857] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.965863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.965881] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.965956] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.965967] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.965971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.965985] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965989] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.965992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.965999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.966029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.966090] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.966095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.966099] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.966102] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.966112] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.966117] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.966120] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.966126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.966143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.966204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.212  [2024-12-16 06:30:40.966210] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.212  [2024-12-16 06:30:40.966213] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.966217] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.212  [2024-12-16 06:30:40.966226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.966230] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.212  [2024-12-16 06:30:40.966234] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.212  [2024-12-16 06:30:40.966240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.212  [2024-12-16 06:30:40.966257] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.212  [2024-12-16 06:30:40.966325] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.966331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.966334] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966337] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.966347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966351] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966355] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.966361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.966387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.966468] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.966474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.966477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.966492] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966496] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966499] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.966552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.966575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.966634] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.966641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.966644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966648] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.966659] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966663] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966667] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.966674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.966692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.966770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.966776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.966779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.966803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966807] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966825] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.966832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.966850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.966916] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.966937] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.966940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966944] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.966953] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.966961] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.966968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.966999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.967074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.967080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.967084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.967097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.967111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.967128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.967185] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.967191] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.967195] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.967208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967215] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.967222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.967251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.967314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.967320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.967323] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.967337] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967341] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967345] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.967351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.967377] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.967431] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.967437] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.967440] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967444] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.967454] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967458] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.967468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.967485] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.967543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.967549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.967552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.967556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.967566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.971536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.971542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c56d30)
00:20:24.213  [2024-12-16 06:30:40.971550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.213  [2024-12-16 06:30:40.971576] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb5350, cid 3, qid 0
00:20:24.213  [2024-12-16 06:30:40.971640] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.213  [2024-12-16 06:30:40.971646] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.213  [2024-12-16 06:30:40.971650] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.213  [2024-12-16 06:30:40.971653] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cb5350) on tqpair=0x1c56d30
00:20:24.213  [2024-12-16 06:30:40.971661] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:20:24.213                                                                                                                                                                                                                              
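The dump above is the output of spdk_nvme_identify run against the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery) at 10.0.0.2:4420. A minimal sketch of reproducing that query by hand, assuming the same build tree and transport parameters visible in this log (the discovery-controller command line itself is not shown here, so this invocation is inferred from the cnode1 invocation below):

    # Hypothetical manual re-run of the discovery-controller identify; parameters taken from the surrounding log
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

With the discovery NQN the tool connects to the discovery controller and prints the discovery log entries listed above; the next invocation below targets subnqn:nqn.2016-06.io.spdk:cnode1 to identify the NVM subsystem itself.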
00:20:24.213   06:30:40	-- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:20:24.214  [2024-12-16 06:30:41.005286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:24.214  [2024-12-16 06:30:41.005530] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82788 ]
00:20:24.214  [2024-12-16 06:30:41.140249] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:20:24.214  [2024-12-16 06:30:41.140311] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:24.214  [2024-12-16 06:30:41.140317] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:24.214  [2024-12-16 06:30:41.140325] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:24.214  [2024-12-16 06:30:41.140332] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:24.214  [2024-12-16 06:30:41.140418] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:20:24.214  [2024-12-16 06:30:41.140458] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x861d30 0
00:20:24.214  [2024-12-16 06:30:41.147577] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:20:24.214  [2024-12-16 06:30:41.147597] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:20:24.214  [2024-12-16 06:30:41.147618] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:20:24.214  [2024-12-16 06:30:41.147621] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:20:24.214  [2024-12-16 06:30:41.147655] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.147661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.147665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.214  [2024-12-16 06:30:41.147675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:20:24.214  [2024-12-16 06:30:41.147702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.214  [2024-12-16 06:30:41.159544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.214  [2024-12-16 06:30:41.159564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.214  [2024-12-16 06:30:41.159585] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159589] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.214  [2024-12-16 06:30:41.159598] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:24.214  [2024-12-16 06:30:41.159605] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:20:24.214  [2024-12-16 06:30:41.159611] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:20:24.214  [2024-12-16 06:30:41.159624] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159631] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.214  [2024-12-16 06:30:41.159639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.214  [2024-12-16 06:30:41.159665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.214  [2024-12-16 06:30:41.159741] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.214  [2024-12-16 06:30:41.159747] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.214  [2024-12-16 06:30:41.159750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.214  [2024-12-16 06:30:41.159759] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:20:24.214  [2024-12-16 06:30:41.159765] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:20:24.214  [2024-12-16 06:30:41.159787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.214  [2024-12-16 06:30:41.159817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.214  [2024-12-16 06:30:41.159835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.214  [2024-12-16 06:30:41.159898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.214  [2024-12-16 06:30:41.159904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.214  [2024-12-16 06:30:41.159907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159911] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.214  [2024-12-16 06:30:41.159916] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:20:24.214  [2024-12-16 06:30:41.159924] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:20:24.214  [2024-12-16 06:30:41.159930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.159938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.214  [2024-12-16 06:30:41.159944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.214  [2024-12-16 06:30:41.159962] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.214  [2024-12-16 06:30:41.160023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.214  [2024-12-16 06:30:41.160029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.214  [2024-12-16 06:30:41.160032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.214  [2024-12-16 06:30:41.160040] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:24.214  [2024-12-16 06:30:41.160050] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160054] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160057] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.214  [2024-12-16 06:30:41.160064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.214  [2024-12-16 06:30:41.160081] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.214  [2024-12-16 06:30:41.160153] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.214  [2024-12-16 06:30:41.160159] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.214  [2024-12-16 06:30:41.160162] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160166] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.214  [2024-12-16 06:30:41.160170] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:20:24.214  [2024-12-16 06:30:41.160175] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:20:24.214  [2024-12-16 06:30:41.160182] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:24.214  [2024-12-16 06:30:41.160287] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:20:24.214  [2024-12-16 06:30:41.160292] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:24.214  [2024-12-16 06:30:41.160300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160307] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.214  [2024-12-16 06:30:41.160314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.214  [2024-12-16 06:30:41.160331] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.214  [2024-12-16 06:30:41.160404] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.214  [2024-12-16 06:30:41.160410] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.214  [2024-12-16 06:30:41.160413] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.214  [2024-12-16 06:30:41.160422] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:24.214  [2024-12-16 06:30:41.160431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160435] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.214  [2024-12-16 06:30:41.160438] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.214  [2024-12-16 06:30:41.160444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.214  [2024-12-16 06:30:41.160461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.215  [2024-12-16 06:30:41.160542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.215  [2024-12-16 06:30:41.160564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.215  [2024-12-16 06:30:41.160567] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160570] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.215  [2024-12-16 06:30:41.160575] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:24.215  [2024-12-16 06:30:41.160579] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:20:24.215  [2024-12-16 06:30:41.160587] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:20:24.215  [2024-12-16 06:30:41.160599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:20:24.215  [2024-12-16 06:30:41.160608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160615] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.215  [2024-12-16 06:30:41.160622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.215  [2024-12-16 06:30:41.160642] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.215  [2024-12-16 06:30:41.160762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.215  [2024-12-16 06:30:41.160769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.215  [2024-12-16 06:30:41.160772] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160776] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861d30): datao=0, datal=4096, cccid=0
00:20:24.215  [2024-12-16 06:30:41.160780] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bff30) on tqpair(0x861d30): expected_datao=0, payload_size=4096
00:20:24.215  [2024-12-16 06:30:41.160787] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160791] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.215  [2024-12-16 06:30:41.160804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.215  [2024-12-16 06:30:41.160807] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160811] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.215  [2024-12-16 06:30:41.160818] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:20:24.215  [2024-12-16 06:30:41.160823] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:20:24.215  [2024-12-16 06:30:41.160827] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:20:24.215  [2024-12-16 06:30:41.160831] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:20:24.215  [2024-12-16 06:30:41.160835] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:20:24.215  [2024-12-16 06:30:41.160839] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:20:24.215  [2024-12-16 06:30:41.160851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:20:24.215  [2024-12-16 06:30:41.160859] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.215  [2024-12-16 06:30:41.160873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:24.215  [2024-12-16 06:30:41.160892] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.215  [2024-12-16 06:30:41.160958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.215  [2024-12-16 06:30:41.160964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.215  [2024-12-16 06:30:41.160967] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160970] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bff30) on tqpair=0x861d30
00:20:24.215  [2024-12-16 06:30:41.160977] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160984] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x861d30)
00:20:24.215  [2024-12-16 06:30:41.160990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:24.215  [2024-12-16 06:30:41.160996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.160999] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.161003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x861d30)
00:20:24.215  [2024-12-16 06:30:41.161008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:24.215  [2024-12-16 06:30:41.161014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.161018] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.161021] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x861d30)
00:20:24.215  [2024-12-16 06:30:41.161026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:24.215  [2024-12-16 06:30:41.161032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.161035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.161038] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.215  [2024-12-16 06:30:41.161044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:24.215  [2024-12-16 06:30:41.161048] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:20:24.215  [2024-12-16 06:30:41.161060] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:24.215  [2024-12-16 06:30:41.161067] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.161070] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.161074] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861d30)
00:20:24.215  [2024-12-16 06:30:41.161080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.215  [2024-12-16 06:30:41.161100] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bff30, cid 0, qid 0
00:20:24.215  [2024-12-16 06:30:41.161106] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0090, cid 1, qid 0
00:20:24.215  [2024-12-16 06:30:41.161110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c01f0, cid 2, qid 0
00:20:24.215  [2024-12-16 06:30:41.161115] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.215  [2024-12-16 06:30:41.161119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c04b0, cid 4, qid 0
00:20:24.215  [2024-12-16 06:30:41.161228] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.215  [2024-12-16 06:30:41.161234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.215  [2024-12-16 06:30:41.161237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.215  [2024-12-16 06:30:41.161241] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c04b0) on tqpair=0x861d30
00:20:24.215  [2024-12-16 06:30:41.161246] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:20:24.215  [2024-12-16 06:30:41.161250] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:20:24.215  [2024-12-16 06:30:41.161258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.161267] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.161274] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161281] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861d30)
00:20:24.216  [2024-12-16 06:30:41.161288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:24.216  [2024-12-16 06:30:41.161305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c04b0, cid 4, qid 0
00:20:24.216  [2024-12-16 06:30:41.161370] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.216  [2024-12-16 06:30:41.161376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.216  [2024-12-16 06:30:41.161379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161383] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c04b0) on tqpair=0x861d30
00:20:24.216  [2024-12-16 06:30:41.161438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.161448] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.161458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161465] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861d30)
00:20:24.216  [2024-12-16 06:30:41.161472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.216  [2024-12-16 06:30:41.161489] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c04b0, cid 4, qid 0
00:20:24.216  [2024-12-16 06:30:41.161585] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.216  [2024-12-16 06:30:41.161592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.216  [2024-12-16 06:30:41.161596] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161599] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861d30): datao=0, datal=4096, cccid=4
00:20:24.216  [2024-12-16 06:30:41.161604] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c04b0) on tqpair(0x861d30): expected_datao=0, payload_size=4096
00:20:24.216  [2024-12-16 06:30:41.161611] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161615] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161622] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.216  [2024-12-16 06:30:41.161628] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.216  [2024-12-16 06:30:41.161631] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161635] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c04b0) on tqpair=0x861d30
00:20:24.216  [2024-12-16 06:30:41.161649] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:20:24.216  [2024-12-16 06:30:41.161659] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.161669] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.161676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161683] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861d30)
00:20:24.216  [2024-12-16 06:30:41.161690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.216  [2024-12-16 06:30:41.161711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c04b0, cid 4, qid 0
00:20:24.216  [2024-12-16 06:30:41.161805] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.216  [2024-12-16 06:30:41.161811] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.216  [2024-12-16 06:30:41.161815] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161818] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861d30): datao=0, datal=4096, cccid=4
00:20:24.216  [2024-12-16 06:30:41.161823] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c04b0) on tqpair(0x861d30): expected_datao=0, payload_size=4096
00:20:24.216  [2024-12-16 06:30:41.161830] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161833] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161841] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.216  [2024-12-16 06:30:41.161847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.216  [2024-12-16 06:30:41.161850] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161853] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c04b0) on tqpair=0x861d30
00:20:24.216  [2024-12-16 06:30:41.161867] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.161877] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.161884] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161888] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.161891] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861d30)
00:20:24.216  [2024-12-16 06:30:41.161898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.216  [2024-12-16 06:30:41.161917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c04b0, cid 4, qid 0
00:20:24.216  [2024-12-16 06:30:41.162000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.216  [2024-12-16 06:30:41.162006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.216  [2024-12-16 06:30:41.162009] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162013] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861d30): datao=0, datal=4096, cccid=4
00:20:24.216  [2024-12-16 06:30:41.162017] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c04b0) on tqpair(0x861d30): expected_datao=0, payload_size=4096
00:20:24.216  [2024-12-16 06:30:41.162024] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162027] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162038] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.216  [2024-12-16 06:30:41.162043] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.216  [2024-12-16 06:30:41.162047] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162050] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c04b0) on tqpair=0x861d30
00:20:24.216  [2024-12-16 06:30:41.162058] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.162066] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.162075] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.162082] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.162086] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.162091] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:20:24.216  [2024-12-16 06:30:41.162095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:20:24.216  [2024-12-16 06:30:41.162111] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:20:24.216  [2024-12-16 06:30:41.162124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162131] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861d30)
00:20:24.216  [2024-12-16 06:30:41.162138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.216  [2024-12-16 06:30:41.162145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861d30)
00:20:24.216  [2024-12-16 06:30:41.162158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:24.216  [2024-12-16 06:30:41.162180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c04b0, cid 4, qid 0
00:20:24.216  [2024-12-16 06:30:41.162187] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0610, cid 5, qid 0
00:20:24.216  [2024-12-16 06:30:41.162267] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.216  [2024-12-16 06:30:41.162273] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.216  [2024-12-16 06:30:41.162277] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162280] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c04b0) on tqpair=0x861d30
00:20:24.216  [2024-12-16 06:30:41.162287] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.216  [2024-12-16 06:30:41.162292] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.216  [2024-12-16 06:30:41.162295] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.216  [2024-12-16 06:30:41.162298] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0610) on tqpair=0x861d30
00:20:24.217  [2024-12-16 06:30:41.162308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162311] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861d30)
00:20:24.217  [2024-12-16 06:30:41.162321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.217  [2024-12-16 06:30:41.162338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0610, cid 5, qid 0
00:20:24.217  [2024-12-16 06:30:41.162451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.217  [2024-12-16 06:30:41.162460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.217  [2024-12-16 06:30:41.162463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0610) on tqpair=0x861d30
00:20:24.217  [2024-12-16 06:30:41.162478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162483] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861d30)
00:20:24.217  [2024-12-16 06:30:41.162494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.217  [2024-12-16 06:30:41.162525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0610, cid 5, qid 0
00:20:24.217  [2024-12-16 06:30:41.162592] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.217  [2024-12-16 06:30:41.162599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.217  [2024-12-16 06:30:41.162603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0610) on tqpair=0x861d30
00:20:24.217  [2024-12-16 06:30:41.162619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861d30)
00:20:24.217  [2024-12-16 06:30:41.162634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.217  [2024-12-16 06:30:41.162654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0610, cid 5, qid 0
00:20:24.217  [2024-12-16 06:30:41.162737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.217  [2024-12-16 06:30:41.162743] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.217  [2024-12-16 06:30:41.162746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0610) on tqpair=0x861d30
00:20:24.217  [2024-12-16 06:30:41.162762] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162770] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x861d30)
00:20:24.217  [2024-12-16 06:30:41.162777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.217  [2024-12-16 06:30:41.162784] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162787] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162791] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x861d30)
00:20:24.217  [2024-12-16 06:30:41.162797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.217  [2024-12-16 06:30:41.162818] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162822] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162825] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x861d30)
00:20:24.217  [2024-12-16 06:30:41.162831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.217  [2024-12-16 06:30:41.162838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162841] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.162845] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x861d30)
00:20:24.217  [2024-12-16 06:30:41.162850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.217  [2024-12-16 06:30:41.162869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0610, cid 5, qid 0
00:20:24.217  [2024-12-16 06:30:41.162875] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c04b0, cid 4, qid 0
00:20:24.217  [2024-12-16 06:30:41.162879] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0770, cid 6, qid 0
00:20:24.217  [2024-12-16 06:30:41.162883] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c08d0, cid 7, qid 0
00:20:24.217  [2024-12-16 06:30:41.163042] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.217  [2024-12-16 06:30:41.163048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.217  [2024-12-16 06:30:41.163052] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163055] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861d30): datao=0, datal=8192, cccid=5
00:20:24.217  [2024-12-16 06:30:41.163059] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c0610) on tqpair(0x861d30): expected_datao=0, payload_size=8192
00:20:24.217  [2024-12-16 06:30:41.163075] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163079] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163084] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.217  [2024-12-16 06:30:41.163090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.217  [2024-12-16 06:30:41.163093] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163096] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861d30): datao=0, datal=512, cccid=4
00:20:24.217  [2024-12-16 06:30:41.163101] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c04b0) on tqpair(0x861d30): expected_datao=0, payload_size=512
00:20:24.217  [2024-12-16 06:30:41.163107] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163110] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.217  [2024-12-16 06:30:41.163120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.217  [2024-12-16 06:30:41.163124] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163127] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861d30): datao=0, datal=512, cccid=6
00:20:24.217  [2024-12-16 06:30:41.163131] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c0770) on tqpair(0x861d30): expected_datao=0, payload_size=512
00:20:24.217  [2024-12-16 06:30:41.163137] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163141] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163146] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:24.217  [2024-12-16 06:30:41.163151] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:24.217  [2024-12-16 06:30:41.163154] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163157] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x861d30): datao=0, datal=4096, cccid=7
00:20:24.217  [2024-12-16 06:30:41.163161] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c08d0) on tqpair(0x861d30): expected_datao=0, payload_size=4096
00:20:24.217  [2024-12-16 06:30:41.163167] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163171] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163178] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.217  [2024-12-16 06:30:41.163184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.217  [2024-12-16 06:30:41.163187] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0610) on tqpair=0x861d30
00:20:24.217  [2024-12-16 06:30:41.163206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.217  [2024-12-16 06:30:41.163213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.217  [2024-12-16 06:30:41.163216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c04b0) on tqpair=0x861d30
00:20:24.217  [2024-12-16 06:30:41.163228] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.217  [2024-12-16 06:30:41.163234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.217  [2024-12-16 06:30:41.163237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.217  [2024-12-16 06:30:41.163240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0770) on tqpair=0x861d30
00:20:24.217  [2024-12-16 06:30:41.163247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.217  =====================================================
00:20:24.217  NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:24.217  =====================================================
00:20:24.217  Controller Capabilities/Features
00:20:24.217  ================================
00:20:24.217  Vendor ID:                             8086
00:20:24.217  Subsystem Vendor ID:                   8086
00:20:24.217  Serial Number:                         SPDK00000000000001
00:20:24.217  Model Number:                          SPDK bdev Controller
00:20:24.217  Firmware Version:                      24.01.1
00:20:24.218  Recommended Arb Burst:                 6
00:20:24.218  IEEE OUI Identifier:                   e4 d2 5c
00:20:24.218  Multi-path I/O
00:20:24.218    May have multiple subsystem ports:   Yes
00:20:24.218    May have multiple controllers:       Yes
00:20:24.218    Associated with SR-IOV VF:           No
00:20:24.218  Max Data Transfer Size:                131072
00:20:24.218  Max Number of Namespaces:              32
00:20:24.218  Max Number of I/O Queues:              127
00:20:24.218  NVMe Specification Version (VS):       1.3
00:20:24.218  NVMe Specification Version (Identify): 1.3
00:20:24.218  Maximum Queue Entries:                 128
00:20:24.218  Contiguous Queues Required:            Yes
00:20:24.218  Arbitration Mechanisms Supported
00:20:24.218    Weighted Round Robin:                Not Supported
00:20:24.218    Vendor Specific:                     Not Supported
00:20:24.218  Reset Timeout:                         15000 ms
00:20:24.218  Doorbell Stride:                       4 bytes
00:20:24.218  NVM Subsystem Reset:                   Not Supported
00:20:24.218  Command Sets Supported
00:20:24.218    NVM Command Set:                     Supported
00:20:24.218  Boot Partition:                        Not Supported
00:20:24.218  Memory Page Size Minimum:              4096 bytes
00:20:24.218  Memory Page Size Maximum:              4096 bytes
00:20:24.218  Persistent Memory Region:              Not Supported
00:20:24.218  Optional Asynchronous Events Supported
00:20:24.218    Namespace Attribute Notices:         Supported
00:20:24.218    Firmware Activation Notices:         Not Supported
00:20:24.218    ANA Change Notices:                  Not Supported
00:20:24.218    PLE Aggregate Log Change Notices:    Not Supported
00:20:24.218    LBA Status Info Alert Notices:       Not Supported
00:20:24.218    EGE Aggregate Log Change Notices:    Not Supported
00:20:24.218    Normal NVM Subsystem Shutdown event: Not Supported
00:20:24.218    Zone Descriptor Change Notices:      Not Supported
00:20:24.218    Discovery Log Change Notices:        Not Supported
00:20:24.218  Controller Attributes
00:20:24.218    128-bit Host Identifier:             Supported
00:20:24.218    Non-Operational Permissive Mode:     Not Supported
00:20:24.218    NVM Sets:                            Not Supported
00:20:24.218    Read Recovery Levels:                Not Supported
00:20:24.218    Endurance Groups:                    Not Supported
00:20:24.218    Predictable Latency Mode:            Not Supported
00:20:24.218    Traffic Based Keep Alive:            Not Supported
00:20:24.218    Namespace Granularity:               Not Supported
00:20:24.218    SQ Associations:                     Not Supported
00:20:24.218    UUID List:                           Not Supported
00:20:24.218    Multi-Domain Subsystem:              Not Supported
00:20:24.218    Fixed Capacity Management:           Not Supported
00:20:24.218    Variable Capacity Management:        Not Supported
00:20:24.218    Delete Endurance Group:              Not Supported
00:20:24.218    Delete NVM Set:                      Not Supported
00:20:24.218    Extended LBA Formats Supported:      Not Supported
00:20:24.218    Flexible Data Placement Supported:   Not Supported
00:20:24.218  
00:20:24.218  Controller Memory Buffer Support
00:20:24.218  ================================
00:20:24.218  Supported:                             No
00:20:24.218  
00:20:24.218  Persistent Memory Region Support
00:20:24.218  ================================
00:20:24.218  Supported:                             No
00:20:24.218  
00:20:24.218  Admin Command Set Attributes
00:20:24.218  ============================
00:20:24.218  Security Send/Receive:                 Not Supported
00:20:24.218  Format NVM:                            Not Supported
00:20:24.218  Firmware Activate/Download:            Not Supported
00:20:24.218  Namespace Management:                  Not Supported
00:20:24.218  Device Self-Test:                      Not Supported
00:20:24.218  Directives:                            Not Supported
00:20:24.218  NVMe-MI:                               Not Supported
00:20:24.218  Virtualization Management:             Not Supported
00:20:24.218  Doorbell Buffer Config:                Not Supported
00:20:24.218  Get LBA Status Capability:             Not Supported
00:20:24.218  Command & Feature Lockdown Capability: Not Supported
00:20:24.218  Abort Command Limit:                   4
00:20:24.218  Async Event Request Limit:             4
00:20:24.218  Number of Firmware Slots:              N/A
00:20:24.218  Firmware Slot 1 Read-Only:             N/A
00:20:24.218  Firmware Activation Without Reset:     N/A
00:20:24.218  Multiple Update Detection Support:     N/A
00:20:24.218  Firmware Update Granularity:           No Information Provided
00:20:24.218  Per-Namespace SMART Log:               No
00:20:24.218  Asymmetric Namespace Access Log Page:  Not Supported
00:20:24.218  Subsystem NQN:                         nqn.2016-06.io.spdk:cnode1
00:20:24.218  Command Effects Log Page:              Supported
00:20:24.218  Get Log Page Extended Data:            Supported
00:20:24.218  Telemetry Log Pages:                   Not Supported
00:20:24.218  Persistent Event Log Pages:            Not Supported
00:20:24.218  Supported Log Pages Log Page:          May Support
00:20:24.218  Commands Supported & Effects Log Page: Not Supported
00:20:24.218  Feature Identifiers & Effects Log Page: May Support
00:20:24.218  NVMe-MI Commands & Effects Log Page:   May Support
00:20:24.218  Data Area 4 for Telemetry Log:         Not Supported
00:20:24.218  Error Log Page Entries Supported:      128
00:20:24.218  Keep Alive:                            Supported
00:20:24.218  Keep Alive Granularity:                10000 ms
00:20:24.218  
00:20:24.218  NVM Command Set Attributes
00:20:24.218  ==========================
00:20:24.218  Submission Queue Entry Size
00:20:24.218    Max:                       64
00:20:24.218    Min:                       64
00:20:24.218  Completion Queue Entry Size
00:20:24.218    Max:                       16
00:20:24.218    Min:                       16
00:20:24.218  Number of Namespaces:        32
00:20:24.218  Compare Command:             Supported
00:20:24.218  Write Uncorrectable Command: Not Supported
00:20:24.218  Dataset Management Command:  Supported
00:20:24.218  Write Zeroes Command:        Supported
00:20:24.218  Set Features Save Field:     Not Supported
00:20:24.218  Reservations:                Supported
00:20:24.218  Timestamp:                   Not Supported
00:20:24.218  Copy:                        Supported
00:20:24.218  Volatile Write Cache:        Present
00:20:24.218  Atomic Write Unit (Normal):  1
00:20:24.218  Atomic Write Unit (PFail):   1
00:20:24.218  Atomic Compare & Write Unit: 1
00:20:24.218  Fused Compare & Write:       Supported
00:20:24.218  Scatter-Gather List
00:20:24.218    SGL Command Set:           Supported
00:20:24.218    SGL Keyed:                 Supported
00:20:24.218    SGL Bit Bucket Descriptor: Not Supported
00:20:24.218    SGL Metadata Pointer:      Not Supported
00:20:24.218    Oversized SGL:             Not Supported
00:20:24.218    SGL Metadata Address:      Not Supported
00:20:24.218    SGL Offset:                Supported
00:20:24.218    Transport SGL Data Block:  Not Supported
00:20:24.218  Replay Protected Memory Block:  Not Supported
00:20:24.218  
00:20:24.218  Firmware Slot Information
00:20:24.218  =========================
00:20:24.218  Active slot:                 1
00:20:24.218  Slot 1 Firmware Revision:    24.01.1
00:20:24.218  
00:20:24.218  
00:20:24.218  Commands Supported and Effects
00:20:24.218  ==============================
00:20:24.218  Admin Commands
00:20:24.218  --------------
00:20:24.218                    Get Log Page (02h): Supported 
00:20:24.218                        Identify (06h): Supported 
00:20:24.218                           Abort (08h): Supported 
00:20:24.218                    Set Features (09h): Supported 
00:20:24.218                    Get Features (0Ah): Supported 
00:20:24.218      Asynchronous Event Request (0Ch): Supported 
00:20:24.218                      Keep Alive (18h): Supported 
00:20:24.218  I/O Commands
00:20:24.218  ------------
00:20:24.218                           Flush (00h): Supported LBA-Change 
00:20:24.218                           Write (01h): Supported LBA-Change 
00:20:24.218                            Read (02h): Supported 
00:20:24.218                         Compare (05h): Supported 
00:20:24.218                    Write Zeroes (08h): Supported LBA-Change 
00:20:24.218              Dataset Management (09h): Supported LBA-Change 
00:20:24.218                            Copy (19h): Supported LBA-Change 
00:20:24.218                         Unknown (79h): Supported LBA-Change 
00:20:24.218                         Unknown (7Ah): Supported 
00:20:24.218  
00:20:24.218  Error Log
00:20:24.218  =========
00:20:24.218  
00:20:24.218  Arbitration
00:20:24.218  ===========
00:20:24.218  Arbitration Burst:           1
00:20:24.218  
00:20:24.218  Power Management
00:20:24.218  ================
00:20:24.218  Number of Power States:          1
00:20:24.218  Current Power State:             Power State #0
00:20:24.218  Power State #0:
00:20:24.218    Max Power:                      0.00 W
00:20:24.218    Non-Operational State:         Operational
00:20:24.218    Entry Latency:                 Not Reported
00:20:24.218    Exit Latency:                  Not Reported
00:20:24.219    Relative Read Throughput:      0
00:20:24.219    Relative Read Latency:         0
00:20:24.219    Relative Write Throughput:     0
00:20:24.219    Relative Write Latency:        0
00:20:24.219    Idle Power:                     Not Reported
00:20:24.219    Active Power:                   Not Reported
00:20:24.219  Non-Operational Permissive Mode: Not Supported
00:20:24.219  
00:20:24.219  Health Information
00:20:24.219  ==================
00:20:24.219  Critical Warnings:
00:20:24.219    Available Spare Space:     OK
00:20:24.219    Temperature:               OK
00:20:24.219    Device Reliability:        OK
00:20:24.219    Read Only:                 No
00:20:24.219    Volatile Memory Backup:    OK
00:20:24.219  Current Temperature:         0 Kelvin (-273 Celsius)
00:20:24.219  Temperature Threshold:   [2024-12-16 06:30:41.163252] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.219  [2024-12-16 06:30:41.163256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c08d0) on tqpair=0x861d30
00:20:24.219  [2024-12-16 06:30:41.163350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163357] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163360] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x861d30)
00:20:24.219  [2024-12-16 06:30:41.163367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.219  [2024-12-16 06:30:41.163388] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c08d0, cid 7, qid 0
00:20:24.219  [2024-12-16 06:30:41.163464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.219  [2024-12-16 06:30:41.163471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.219  [2024-12-16 06:30:41.163474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163478] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c08d0) on tqpair=0x861d30
00:20:24.219  [2024-12-16 06:30:41.163509] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:20:24.219  [2024-12-16 06:30:41.163537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.219  [2024-12-16 06:30:41.163544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.219  [2024-12-16 06:30:41.163550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.219  [2024-12-16 06:30:41.163556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.219  [2024-12-16 06:30:41.163564] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.219  [2024-12-16 06:30:41.163578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.219  [2024-12-16 06:30:41.163600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.219  [2024-12-16 06:30:41.163660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.219  [2024-12-16 06:30:41.163667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.219  [2024-12-16 06:30:41.163670] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.219  [2024-12-16 06:30:41.163680] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163688] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.219  [2024-12-16 06:30:41.163694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.219  [2024-12-16 06:30:41.163714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.219  [2024-12-16 06:30:41.163799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.219  [2024-12-16 06:30:41.163805] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.219  [2024-12-16 06:30:41.163809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.219  [2024-12-16 06:30:41.163817] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:20:24.219  [2024-12-16 06:30:41.163821] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:20:24.219  [2024-12-16 06:30:41.163830] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163837] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.219  [2024-12-16 06:30:41.163844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.219  [2024-12-16 06:30:41.163860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.219  [2024-12-16 06:30:41.163927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.219  [2024-12-16 06:30:41.163933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.219  [2024-12-16 06:30:41.163937] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163940] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.219  [2024-12-16 06:30:41.163949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.163957] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.219  [2024-12-16 06:30:41.163963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.219  [2024-12-16 06:30:41.163980] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.219  [2024-12-16 06:30:41.164048] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.219  [2024-12-16 06:30:41.164054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.219  [2024-12-16 06:30:41.164057] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.219  [2024-12-16 06:30:41.164070] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164078] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.219  [2024-12-16 06:30:41.164084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.219  [2024-12-16 06:30:41.164100] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.219  [2024-12-16 06:30:41.164162] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.219  [2024-12-16 06:30:41.164168] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.219  [2024-12-16 06:30:41.164171] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164175] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.219  [2024-12-16 06:30:41.164184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164188] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164191] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.219  [2024-12-16 06:30:41.164198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.219  [2024-12-16 06:30:41.164214] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.219  [2024-12-16 06:30:41.164284] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.219  [2024-12-16 06:30:41.164290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.219  [2024-12-16 06:30:41.164293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164297] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.219  [2024-12-16 06:30:41.164306] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.219  [2024-12-16 06:30:41.164313] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.219  [2024-12-16 06:30:41.164320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.219  [2024-12-16 06:30:41.164337] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.219  [2024-12-16 06:30:41.164393] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.219  [2024-12-16 06:30:41.164399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.164402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164405] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.164414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164422] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.164428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.164444] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.164527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.164534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.164537] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164541] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.164550] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.164564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.164582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.164645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.164651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.164654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164658] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.164667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.164680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.164696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.164769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.164775] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.164778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164781] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.164806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164813] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.164819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.164836] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.164896] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.164904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.164907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.164920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164924] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.164928] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.164934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.164951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.165017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.165033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.165037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165041] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.165051] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165055] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165059] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.165066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.165084] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.165146] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.165161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.165165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.165193] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165197] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165200] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.165207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.165232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.165291] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.165307] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.165311] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165315] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.165324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165332] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.165338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.165356] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.165416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.165426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.165430] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165434] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.165443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.165457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.165474] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.165568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.165579] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.165583] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165586] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.165597] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.165611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.220  [2024-12-16 06:30:41.165631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.220  [2024-12-16 06:30:41.165700] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.220  [2024-12-16 06:30:41.165711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.220  [2024-12-16 06:30:41.165714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165718] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.220  [2024-12-16 06:30:41.165728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.220  [2024-12-16 06:30:41.165736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.220  [2024-12-16 06:30:41.165742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.165759] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.165843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.165849] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.165852] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.165871] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.165881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.165885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.165888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.165895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.165911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.165980] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.165986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.165989] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.165993] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.166002] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.166016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.166032] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.166104] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.166110] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.166113] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166116] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.166126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166133] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.166139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.166156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.166207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.166213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.166216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.166229] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166233] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166236] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.166243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.166259] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.166323] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.166333] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.166337] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.166350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166354] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.166364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.166410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.166481] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.166501] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.166506] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166510] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.166522] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.166539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.166559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.166627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.166638] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.166642] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166646] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.166657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166662] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166666] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.166673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.166692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.166785] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.166791] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.166809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.166837] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166841] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166844] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.166850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.166867] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.166927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.166933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.166936] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.166948] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.166956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.221  [2024-12-16 06:30:41.166963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.221  [2024-12-16 06:30:41.166979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.221  [2024-12-16 06:30:41.167043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.221  [2024-12-16 06:30:41.167049] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.221  [2024-12-16 06:30:41.167052] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.167056] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.221  [2024-12-16 06:30:41.167065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.221  [2024-12-16 06:30:41.167069] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.222  [2024-12-16 06:30:41.167078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.222  [2024-12-16 06:30:41.167095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.222  [2024-12-16 06:30:41.167172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.222  [2024-12-16 06:30:41.167178] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.222  [2024-12-16 06:30:41.167181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167185] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.222  [2024-12-16 06:30:41.167194] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167198] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167201] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.222  [2024-12-16 06:30:41.167208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.222  [2024-12-16 06:30:41.167224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.222  [2024-12-16 06:30:41.167315] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.222  [2024-12-16 06:30:41.167321] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.222  [2024-12-16 06:30:41.167324] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.222  [2024-12-16 06:30:41.167336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167340] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167344] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.222  [2024-12-16 06:30:41.167350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.222  [2024-12-16 06:30:41.167366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.222  [2024-12-16 06:30:41.167432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.222  [2024-12-16 06:30:41.167438] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.222  [2024-12-16 06:30:41.167441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167445] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.222  [2024-12-16 06:30:41.167454] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167458] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.222  [2024-12-16 06:30:41.167468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.222  [2024-12-16 06:30:41.167484] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.222  [2024-12-16 06:30:41.167564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.222  [2024-12-16 06:30:41.167571] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.222  [2024-12-16 06:30:41.167574] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.167577] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.222  [2024-12-16 06:30:41.171554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.171571] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.171575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x861d30)
00:20:24.222  [2024-12-16 06:30:41.171600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.222  [2024-12-16 06:30:41.171625] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c0350, cid 3, qid 0
00:20:24.222  [2024-12-16 06:30:41.171703] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:24.222  [2024-12-16 06:30:41.171709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:24.222  [2024-12-16 06:30:41.171713] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:24.222  [2024-12-16 06:30:41.171716] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8c0350) on tqpair=0x861d30
00:20:24.222  [2024-12-16 06:30:41.171724] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
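The long run of FABRIC PROPERTY GET entries above is the host driving the NVMe shutdown handshake over the TCP fabric: the controller code first writes CC.SHN via a Property Set capsule, then polls CSTS.SHST with Property Get capsules until the controller reports shutdown complete (here after about 7 ms, well inside the 10000 ms timeout logged earlier). A minimal sketch of that polling pattern, with hypothetical prop_get/prop_set helpers standing in for the Fabrics property capsules (register offsets and field values taken from the NVMe base specification, not from SPDK code):

    import time

    # Controller register offsets and fields (NVMe base specification).
    CC_OFFSET, CSTS_OFFSET = 0x14, 0x1C
    CC_SHN_NORMAL = 0x1 << 14        # CC.SHN = 01b requests a normal shutdown
    CSTS_SHST_COMPLETE = 0x2         # CSTS.SHST = 10b means shutdown complete

    def shutdown_controller(prop_get, prop_set, timeout_s=10.0):
        """Request a normal shutdown, then poll CSTS.SHST until it completes.

        prop_get(offset) -> value and prop_set(offset, value) are assumed
        helpers that issue Fabrics Property Get/Set capsules on the admin
        queue; they correspond to the PROPERTY GET/SET commands in the log.
        """
        cc = prop_get(CC_OFFSET)
        prop_set(CC_OFFSET, (cc & ~(0x3 << 14)) | CC_SHN_NORMAL)

        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            shst = (prop_get(CSTS_OFFSET) >> 2) & 0x3   # CSTS bits 3:2
            if shst == CSTS_SHST_COMPLETE:
                return True          # "shutdown complete in N milliseconds"
            time.sleep(0.001)        # each retry appears as another PROPERTY GET
        return False                 # would hit the 10000 ms shutdown timeout

Each loop iteration corresponds to one PROPERTY GET capsule in the trace; the real SPDK path is asynchronous and state-machine driven, so this is only a schematic of the logic, not the implementation.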
00:20:24.481  Available Spare:             0%
00:20:24.481  Available Spare Threshold:   0%
00:20:24.481  Life Percentage Used:        0%
00:20:24.481  Data Units Read:             0
00:20:24.481  Data Units Written:          0
00:20:24.481  Host Read Commands:          0
00:20:24.481  Host Write Commands:         0
00:20:24.481  Controller Busy Time:        0 minutes
00:20:24.481  Power Cycles:                0
00:20:24.481  Power On Hours:              0 hours
00:20:24.481  Unsafe Shutdowns:            0
00:20:24.481  Unrecoverable Media Errors:  0
00:20:24.481  Lifetime Error Log Entries:  0
00:20:24.481  Warning Temperature Time:    0 minutes
00:20:24.481  Critical Temperature Time:   0 minutes
00:20:24.481  
00:20:24.481  Number of Queues
00:20:24.481  ================
00:20:24.481  Number of I/O Submission Queues:      127
00:20:24.481  Number of I/O Completion Queues:      127
00:20:24.481  
00:20:24.481  Active Namespaces
00:20:24.481  =================
00:20:24.481  Namespace ID:1
00:20:24.481  Error Recovery Timeout:                Unlimited
00:20:24.481  Command Set Identifier:                NVM (00h)
00:20:24.481  Deallocate:                            Supported
00:20:24.481  Deallocated/Unwritten Error:           Not Supported
00:20:24.481  Deallocated Read Value:                Unknown
00:20:24.481  Deallocate in Write Zeroes:            Not Supported
00:20:24.481  Deallocated Guard Field:               0xFFFF
00:20:24.481  Flush:                                 Supported
00:20:24.481  Reservation:                           Supported
00:20:24.481  Namespace Sharing Capabilities:        Multiple Controllers
00:20:24.481  Size (in LBAs):                        131072 (0GiB)
00:20:24.481  Capacity (in LBAs):                    131072 (0GiB)
00:20:24.481  Utilization (in LBAs):                 131072 (0GiB)
00:20:24.481  NGUID:                                 ABCDEF0123456789ABCDEF0123456789
00:20:24.481  EUI64:                                 ABCDEF0123456789
00:20:24.481  UUID:                                  826d191e-d2fd-4ff4-86e5-7226fa98ceed
00:20:24.481  Thin Provisioning:                     Not Supported
00:20:24.481  Per-NS Atomic Units:                   Yes
00:20:24.481    Atomic Boundary Size (Normal):       0
00:20:24.481    Atomic Boundary Size (PFail):        0
00:20:24.481    Atomic Boundary Offset:              0
00:20:24.481  Maximum Single Source Range Length:    65535
00:20:24.481  Maximum Copy Length:                   65535
00:20:24.481  Maximum Source Range Count:            1
00:20:24.481  NGUID/EUI64 Never Reused:              No
00:20:24.481  Namespace Write Protected:             No
00:20:24.481  Number of LBA Formats:                 1
00:20:24.481  Current LBA Format:                    LBA Format #00
00:20:24.481  LBA Format #00: Data Size:   512  Metadata Size:     0
00:20:24.481  
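As a quick cross-check of the namespace numbers above: 131072 LBAs at the 512-byte data size of LBA Format #00 is 64 MiB, consistent with a 64 MiB malloc backing device, and the identify tool's GiB-granularity rounding prints that as 0GiB. Illustrative arithmetic only:

    nsze_lbas = 131072        # Size / Capacity / Utilization (in LBAs)
    lba_data_size = 512       # LBA Format #00: Data Size

    total_bytes = nsze_lbas * lba_data_size
    assert total_bytes == 64 * 1024 ** 2        # 64 MiB
    print(total_bytes // 1024 ** 3, "GiB")      # 0 GiB -> "131072 (0GiB)"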
00:20:24.481   06:30:41	-- host/identify.sh@51 -- # sync
00:20:24.481   06:30:41	-- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:24.481   06:30:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:24.481   06:30:41	-- common/autotest_common.sh@10 -- # set +x
00:20:24.481   06:30:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:24.481   06:30:41	-- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:20:24.481   06:30:41	-- host/identify.sh@56 -- # nvmftestfini
00:20:24.481   06:30:41	-- nvmf/common.sh@476 -- # nvmfcleanup
00:20:24.481   06:30:41	-- nvmf/common.sh@116 -- # sync
00:20:24.481   06:30:41	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:24.481   06:30:41	-- nvmf/common.sh@119 -- # set +e
00:20:24.481   06:30:41	-- nvmf/common.sh@120 -- # for i in {1..20}
00:20:24.481   06:30:41	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:24.481  rmmod nvme_tcp
00:20:24.481  rmmod nvme_fabrics
00:20:24.481  rmmod nvme_keyring
00:20:24.482   06:30:41	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:24.482   06:30:41	-- nvmf/common.sh@123 -- # set -e
00:20:24.482   06:30:41	-- nvmf/common.sh@124 -- # return 0
00:20:24.482   06:30:41	-- nvmf/common.sh@477 -- # '[' -n 82727 ']'
00:20:24.482   06:30:41	-- nvmf/common.sh@478 -- # killprocess 82727
00:20:24.482   06:30:41	-- common/autotest_common.sh@936 -- # '[' -z 82727 ']'
00:20:24.482   06:30:41	-- common/autotest_common.sh@940 -- # kill -0 82727
00:20:24.482    06:30:41	-- common/autotest_common.sh@941 -- # uname
00:20:24.482   06:30:41	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:24.482    06:30:41	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82727
00:20:24.482  killing process with pid 82727
00:20:24.482   06:30:41	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:24.482   06:30:41	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:24.482   06:30:41	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 82727'
00:20:24.482   06:30:41	-- common/autotest_common.sh@955 -- # kill 82727
00:20:24.482  [2024-12-16 06:30:41.334288] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:20:24.482   06:30:41	-- common/autotest_common.sh@960 -- # wait 82727
00:20:24.741   06:30:41	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:24.741   06:30:41	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:24.741   06:30:41	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:24.741   06:30:41	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:24.741   06:30:41	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:24.741   06:30:41	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:24.741   06:30:41	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:24.741    06:30:41	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:24.741   06:30:41	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:20:24.741  
00:20:24.741  real	0m2.730s
00:20:24.741  user	0m7.605s
00:20:24.741  sys	0m0.694s
00:20:24.741  ************************************
00:20:24.741  END TEST nvmf_identify
00:20:24.741  ************************************
00:20:24.741   06:30:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:24.741   06:30:41	-- common/autotest_common.sh@10 -- # set +x
00:20:24.741   06:30:41	-- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:24.741   06:30:41	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:24.741   06:30:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:24.741   06:30:41	-- common/autotest_common.sh@10 -- # set +x
00:20:24.741  ************************************
00:20:24.741  START TEST nvmf_perf
00:20:24.741  ************************************
00:20:24.741   06:30:41	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:25.001  * Looking for test storage...
00:20:25.001  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:25.001    06:30:41	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:25.001     06:30:41	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:25.001     06:30:41	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:25.001    06:30:41	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:25.001    06:30:41	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:25.001    06:30:41	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:25.001    06:30:41	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:25.001    06:30:41	-- scripts/common.sh@335 -- # IFS=.-:
00:20:25.001    06:30:41	-- scripts/common.sh@335 -- # read -ra ver1
00:20:25.001    06:30:41	-- scripts/common.sh@336 -- # IFS=.-:
00:20:25.001    06:30:41	-- scripts/common.sh@336 -- # read -ra ver2
00:20:25.001    06:30:41	-- scripts/common.sh@337 -- # local 'op=<'
00:20:25.001    06:30:41	-- scripts/common.sh@339 -- # ver1_l=2
00:20:25.001    06:30:41	-- scripts/common.sh@340 -- # ver2_l=1
00:20:25.001    06:30:41	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:25.001    06:30:41	-- scripts/common.sh@343 -- # case "$op" in
00:20:25.001    06:30:41	-- scripts/common.sh@344 -- # : 1
00:20:25.001    06:30:41	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:25.001    06:30:41	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:25.001     06:30:41	-- scripts/common.sh@364 -- # decimal 1
00:20:25.001     06:30:41	-- scripts/common.sh@352 -- # local d=1
00:20:25.001     06:30:41	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:25.001     06:30:41	-- scripts/common.sh@354 -- # echo 1
00:20:25.001    06:30:41	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:25.001     06:30:41	-- scripts/common.sh@365 -- # decimal 2
00:20:25.001     06:30:41	-- scripts/common.sh@352 -- # local d=2
00:20:25.001     06:30:41	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:25.001     06:30:41	-- scripts/common.sh@354 -- # echo 2
00:20:25.001    06:30:41	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:25.001    06:30:41	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:25.001    06:30:41	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:25.001    06:30:41	-- scripts/common.sh@367 -- # return 0
00:20:25.001    06:30:41	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:25.001    06:30:41	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:25.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.001  		--rc genhtml_branch_coverage=1
00:20:25.001  		--rc genhtml_function_coverage=1
00:20:25.001  		--rc genhtml_legend=1
00:20:25.001  		--rc geninfo_all_blocks=1
00:20:25.001  		--rc geninfo_unexecuted_blocks=1
00:20:25.001  		
00:20:25.001  		'
00:20:25.001    06:30:41	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:25.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.001  		--rc genhtml_branch_coverage=1
00:20:25.001  		--rc genhtml_function_coverage=1
00:20:25.001  		--rc genhtml_legend=1
00:20:25.001  		--rc geninfo_all_blocks=1
00:20:25.001  		--rc geninfo_unexecuted_blocks=1
00:20:25.001  		
00:20:25.001  		'
00:20:25.001    06:30:41	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:25.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.001  		--rc genhtml_branch_coverage=1
00:20:25.001  		--rc genhtml_function_coverage=1
00:20:25.001  		--rc genhtml_legend=1
00:20:25.001  		--rc geninfo_all_blocks=1
00:20:25.001  		--rc geninfo_unexecuted_blocks=1
00:20:25.002  		
00:20:25.002  		'
00:20:25.002    06:30:41	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:25.002  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.002  		--rc genhtml_branch_coverage=1
00:20:25.002  		--rc genhtml_function_coverage=1
00:20:25.002  		--rc genhtml_legend=1
00:20:25.002  		--rc geninfo_all_blocks=1
00:20:25.002  		--rc geninfo_unexecuted_blocks=1
00:20:25.002  		
00:20:25.002  		'
00:20:25.002   06:30:41	-- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:25.002     06:30:41	-- nvmf/common.sh@7 -- # uname -s
00:20:25.002    06:30:41	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:25.002    06:30:41	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:25.002    06:30:41	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:25.002    06:30:41	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:25.002    06:30:41	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:25.002    06:30:41	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:25.002    06:30:41	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:25.002    06:30:41	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:25.002    06:30:41	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:25.002     06:30:41	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:25.002    06:30:41	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:20:25.002    06:30:41	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:20:25.002    06:30:41	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:25.002    06:30:41	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:25.002    06:30:41	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:25.002    06:30:41	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:25.002     06:30:41	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:25.002     06:30:41	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:25.002     06:30:41	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:25.002      06:30:41	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:25.002      06:30:41	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:25.002      06:30:41	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:25.002      06:30:41	-- paths/export.sh@5 -- # export PATH
00:20:25.002      06:30:41	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:25.002    06:30:41	-- nvmf/common.sh@46 -- # : 0
00:20:25.002    06:30:41	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:25.002    06:30:41	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:25.002    06:30:41	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:25.002    06:30:41	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:25.002    06:30:41	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:25.002    06:30:41	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:25.002    06:30:41	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:25.002    06:30:41	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:25.002   06:30:41	-- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:20:25.002   06:30:41	-- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:20:25.002   06:30:41	-- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:25.002   06:30:41	-- host/perf.sh@17 -- # nvmftestinit
00:20:25.002   06:30:41	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:25.002   06:30:41	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:25.002   06:30:41	-- nvmf/common.sh@436 -- # prepare_net_devs
00:20:25.002   06:30:41	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:25.002   06:30:41	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:25.002   06:30:41	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:25.002   06:30:41	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:25.002    06:30:41	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:25.002   06:30:41	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:20:25.002   06:30:41	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:20:25.002   06:30:41	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:20:25.002   06:30:41	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:20:25.002   06:30:41	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:20:25.002   06:30:41	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:20:25.002   06:30:41	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:25.002   06:30:41	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:25.002   06:30:41	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:20:25.002   06:30:41	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:20:25.002   06:30:41	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:20:25.002   06:30:41	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:20:25.002   06:30:41	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:20:25.002   06:30:41	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:25.002   06:30:41	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:20:25.002   06:30:41	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:20:25.002   06:30:41	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:20:25.002   06:30:41	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:20:25.002   06:30:41	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:20:25.002   06:30:41	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:20:25.002  Cannot find device "nvmf_tgt_br"
00:20:25.002   06:30:41	-- nvmf/common.sh@154 -- # true
00:20:25.002   06:30:41	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:20:25.002  Cannot find device "nvmf_tgt_br2"
00:20:25.002   06:30:41	-- nvmf/common.sh@155 -- # true
00:20:25.002   06:30:41	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:20:25.002   06:30:41	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:20:25.002  Cannot find device "nvmf_tgt_br"
00:20:25.002   06:30:41	-- nvmf/common.sh@157 -- # true
00:20:25.002   06:30:41	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:20:25.261  Cannot find device "nvmf_tgt_br2"
00:20:25.261   06:30:41	-- nvmf/common.sh@158 -- # true
00:20:25.261   06:30:41	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:20:25.261   06:30:42	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:20:25.261   06:30:42	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:20:25.261  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:25.261   06:30:42	-- nvmf/common.sh@161 -- # true
00:20:25.261   06:30:42	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:20:25.261  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:20:25.261   06:30:42	-- nvmf/common.sh@162 -- # true
00:20:25.261   06:30:42	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:20:25.262   06:30:42	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:20:25.262   06:30:42	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:20:25.262   06:30:42	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:20:25.262   06:30:42	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:20:25.262   06:30:42	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:20:25.262   06:30:42	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:20:25.262   06:30:42	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:20:25.262   06:30:42	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:20:25.262   06:30:42	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:20:25.262   06:30:42	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:20:25.262   06:30:42	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:20:25.262   06:30:42	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:20:25.262   06:30:42	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:20:25.262   06:30:42	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:20:25.262   06:30:42	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:20:25.262   06:30:42	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:20:25.262   06:30:42	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:20:25.262   06:30:42	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:20:25.262   06:30:42	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:20:25.262   06:30:42	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:20:25.262   06:30:42	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:20:25.262   06:30:42	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:20:25.521   06:30:42	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:20:25.521  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:25.521  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms
00:20:25.521  
00:20:25.521  --- 10.0.0.2 ping statistics ---
00:20:25.521  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.521  rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms
00:20:25.521   06:30:42	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:20:25.521  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:25.521  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms
00:20:25.521  
00:20:25.521  --- 10.0.0.3 ping statistics ---
00:20:25.521  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.521  rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:20:25.521   06:30:42	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:25.521  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:25.521  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:20:25.521  
00:20:25.521  --- 10.0.0.1 ping statistics ---
00:20:25.521  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.521  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:20:25.521   06:30:42	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:25.521   06:30:42	-- nvmf/common.sh@421 -- # return 0
00:20:25.521   06:30:42	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:20:25.521   06:30:42	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:25.521   06:30:42	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:20:25.521   06:30:42	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:20:25.521   06:30:42	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:25.521   06:30:42	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:20:25.521   06:30:42	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
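Editor's note: the nvmf_veth_init sequence traced above builds a small veth/bridge topology between the initiator (host) and the target network namespace. The condensed sketch below restates the same ip/iptables commands from the trace (interface names, addresses, and port exactly as logged; the intermediate "link set ... up" steps are omitted for brevity); it is not an additional step performed by the test.

    # target-side interfaces live in their own namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator gets 10.0.0.1, the two target interfaces get 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # all bridge-side peers are enslaved to one bridge so the three endpoints can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks that follow in the trace confirm this topology is up before the target is started.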
00:20:25.521   06:30:42	-- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:20:25.521   06:30:42	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:20:25.521   06:30:42	-- common/autotest_common.sh@722 -- # xtrace_disable
00:20:25.521   06:30:42	-- common/autotest_common.sh@10 -- # set +x
00:20:25.521   06:30:42	-- nvmf/common.sh@469 -- # nvmfpid=82963
00:20:25.521   06:30:42	-- nvmf/common.sh@470 -- # waitforlisten 82963
00:20:25.521   06:30:42	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:20:25.521   06:30:42	-- common/autotest_common.sh@829 -- # '[' -z 82963 ']'
00:20:25.521   06:30:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:25.521  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:25.521   06:30:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:25.521   06:30:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:25.521   06:30:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:25.521   06:30:42	-- common/autotest_common.sh@10 -- # set +x
00:20:25.521  [2024-12-16 06:30:42.334218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:25.521  [2024-12-16 06:30:42.334331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:25.521  [2024-12-16 06:30:42.473353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:25.781  [2024-12-16 06:30:42.548479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:25.781  [2024-12-16 06:30:42.548621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:25.781  [2024-12-16 06:30:42.548633] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:25.781  [2024-12-16 06:30:42.548640] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:25.781  [2024-12-16 06:30:42.548805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:25.781  [2024-12-16 06:30:42.549374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:25.781  [2024-12-16 06:30:42.549611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:25.781  [2024-12-16 06:30:42.549615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
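Editor's note: nvmfappstart, as traced above, launches nvmf_tgt inside the target namespace and then blocks until the app's RPC socket answers. A minimal sketch of that pattern is below; the spdk_get_version poll is an illustrative choice and not necessarily what waitforlisten uses internally, and rpc.py abbreviates the full scripts/rpc.py path shown in the trace.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the target is ready to accept RPCs
    until rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        sleep 0.5
    done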
00:20:26.351   06:30:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:26.351   06:30:43	-- common/autotest_common.sh@862 -- # return 0
00:20:26.351   06:30:43	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:20:26.351   06:30:43	-- common/autotest_common.sh@728 -- # xtrace_disable
00:20:26.351   06:30:43	-- common/autotest_common.sh@10 -- # set +x
00:20:26.351   06:30:43	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:26.351   06:30:43	-- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:20:26.351   06:30:43	-- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config
00:20:26.920    06:30:43	-- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev
00:20:26.920    06:30:43	-- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:20:27.179   06:30:44	-- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0
00:20:27.179    06:30:44	-- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:20:27.438   06:30:44	-- host/perf.sh@31 -- # bdevs=' Malloc0'
00:20:27.438   06:30:44	-- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']'
00:20:27.438   06:30:44	-- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:20:27.438   06:30:44	-- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:20:27.438   06:30:44	-- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:27.697  [2024-12-16 06:30:44.526662] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:27.697   06:30:44	-- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:27.957   06:30:44	-- host/perf.sh@45 -- # for bdev in $bdevs
00:20:27.957   06:30:44	-- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:28.227   06:30:45	-- host/perf.sh@45 -- # for bdev in $bdevs
00:20:28.227   06:30:45	-- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:20:28.485   06:30:45	-- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:28.744  [2024-12-16 06:30:45.480422] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:28.744   06:30:45	-- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
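Editor's note: condensing the RPC calls traced above, the NVMe-oF TCP export used by the rest of this perf run is built with a handful of rpc.py calls (subsystem name, serial, bdevs, address, and port exactly as logged; rpc.py abbreviates the full scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420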
00:20:28.744   06:30:45	-- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']'
00:20:28.744   06:30:45	-- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0'
00:20:28.744   06:30:45	-- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:20:28.744   06:30:45	-- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0'
00:20:30.124  Initializing NVMe Controllers
00:20:30.124  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:20:30.124  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:20:30.124  Initialization complete. Launching workers.
00:20:30.124  ========================================================
00:20:30.124                                                                             Latency(us)
00:20:30.124  Device Information                     :       IOPS      MiB/s    Average        min        max
00:20:30.124  PCIE (0000:00:06.0) NSID 1 from core  0:   20095.98      78.50    1591.86     399.20    9119.64
00:20:30.124  ========================================================
00:20:30.124  Total                                  :   20095.98      78.50    1591.86     399.20    9119.64
00:20:30.124  
00:20:30.124   06:30:46	-- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:31.502  Initializing NVMe Controllers
00:20:31.502  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:31.502  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:31.502  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:31.502  Initialization complete. Launching workers.
00:20:31.502  ========================================================
00:20:31.502                                                                                                               Latency(us)
00:20:31.502  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:20:31.502  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    3741.56      14.62     267.06     101.33    6053.32
00:20:31.502  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:     123.62       0.48    8144.61    5881.82   12027.83
00:20:31.502  ========================================================
00:20:31.502  Total                                                                    :    3865.18      15.10     519.01     101.33   12027.83
00:20:31.502  
00:20:31.502   06:30:48	-- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:32.879  [2024-12-16 06:30:49.425520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf0fc0 is same with the state(5) to be set
00:20:32.879  [2024-12-16 06:30:49.425945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf0fc0 is same with the state(5) to be set
00:20:32.879  [2024-12-16 06:30:49.426052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf0fc0 is same with the state(5) to be set
00:20:32.879  [2024-12-16 06:30:49.426117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf0fc0 is same with the state(5) to be set
00:20:32.879  [2024-12-16 06:30:49.426168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf0fc0 is same with the state(5) to be set
00:20:32.879  Initializing NVMe Controllers
00:20:32.879  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:32.879  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:32.879  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:32.879  Initialization complete. Launching workers.
00:20:32.879  ========================================================
00:20:32.879                                                                                                               Latency(us)
00:20:32.879  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:20:32.879  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   10704.97      41.82    2991.34     592.05    6851.75
00:20:32.879  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    2686.99      10.50   12009.26    5877.01   20352.32
00:20:32.879  ========================================================
00:20:32.879  Total                                                                    :   13391.96      52.31    4800.71     592.05   20352.32
00:20:32.879  
00:20:32.879   06:30:49	-- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
00:20:32.879   06:30:49	-- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:35.412  Initializing NVMe Controllers
00:20:35.412  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:35.412  Controller IO queue size 128, less than required.
00:20:35.412  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:35.412  Controller IO queue size 128, less than required.
00:20:35.412  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:35.412  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:35.412  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:35.412  Initialization complete. Launching workers.
00:20:35.412  ========================================================
00:20:35.412                                                                                                               Latency(us)
00:20:35.412  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:20:35.412  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1634.47     408.62   79106.56   46956.52  151532.11
00:20:35.412  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:     533.99     133.50  261503.75  111740.68  531469.69
00:20:35.412  ========================================================
00:20:35.412  Total                                                                    :    2168.47     542.12  124022.44   46956.52  531469.69
00:20:35.412  
00:20:35.412   06:30:52	-- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:20:35.671  No valid NVMe controllers or AIO or URING devices found
00:20:35.671  Initializing NVMe Controllers
00:20:35.671  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:35.671  Controller IO queue size 128, less than required.
00:20:35.671  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:35.671  WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:20:35.671  Controller IO queue size 128, less than required.
00:20:35.671  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:35.671  WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:20:35.671  WARNING: Some requested NVMe devices were skipped
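Editor's note: the warnings above are expected for this run, since an I/O size of 36964 bytes is not aligned to either namespace's sector size, which is easy to verify:

    echo $(( 36964 % 512 ))    # 100 -> not a multiple of the 512 B sectors of nsid 1
    echo $(( 36964 % 4096 ))   # 100 -> not a multiple of the 4096 B sectors of nsid 2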
00:20:35.671   06:30:52	-- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:20:38.208  Initializing NVMe Controllers
00:20:38.208  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:38.208  Controller IO queue size 128, less than required.
00:20:38.208  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:38.208  Controller IO queue size 128, less than required.
00:20:38.208  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:38.208  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:38.208  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:38.208  Initialization complete. Launching workers.
00:20:38.208  
00:20:38.208  ====================
00:20:38.208  lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:20:38.208  TCP transport:
00:20:38.208  	polls:              8653
00:20:38.208  	idle_polls:         5133
00:20:38.208  	sock_completions:   3520
00:20:38.208  	nvme_completions:   4413
00:20:38.208  	submitted_requests: 6822
00:20:38.208  	queued_requests:    1
00:20:38.208  
00:20:38.208  ====================
00:20:38.208  lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:20:38.208  TCP transport:
00:20:38.208  	polls:              8899
00:20:38.208  	idle_polls:         5807
00:20:38.208  	sock_completions:   3092
00:20:38.208  	nvme_completions:   5845
00:20:38.208  	submitted_requests: 8895
00:20:38.208  	queued_requests:    1
00:20:38.208  ========================================================
00:20:38.208                                                                                                               Latency(us)
00:20:38.208  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:20:38.208  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1166.21     291.55  112045.90   72741.07  180071.44
00:20:38.208  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    1523.96     380.99   84793.38   44484.61  138527.89
00:20:38.208  ========================================================
00:20:38.208  Total                                                                    :    2690.17     672.54   96607.53   44484.61  180071.44
00:20:38.208  
00:20:38.208   06:30:54	-- host/perf.sh@66 -- # sync
00:20:38.208   06:30:54	-- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:38.473   06:30:55	-- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:20:38.473   06:30:55	-- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']'
00:20:38.473    06:30:55	-- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:20:38.733   06:30:55	-- host/perf.sh@72 -- # ls_guid=ec459490-0e57-4de7-b7a1-903b666d13d5
00:20:38.733   06:30:55	-- host/perf.sh@73 -- # get_lvs_free_mb ec459490-0e57-4de7-b7a1-903b666d13d5
00:20:38.733   06:30:55	-- common/autotest_common.sh@1353 -- # local lvs_uuid=ec459490-0e57-4de7-b7a1-903b666d13d5
00:20:38.733   06:30:55	-- common/autotest_common.sh@1354 -- # local lvs_info
00:20:38.733   06:30:55	-- common/autotest_common.sh@1355 -- # local fc
00:20:38.733   06:30:55	-- common/autotest_common.sh@1356 -- # local cs
00:20:38.733    06:30:55	-- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:20:38.991   06:30:55	-- common/autotest_common.sh@1357 -- # lvs_info='[
00:20:38.991    {
00:20:38.991      "base_bdev": "Nvme0n1",
00:20:38.991      "block_size": 4096,
00:20:38.991      "cluster_size": 4194304,
00:20:38.991      "free_clusters": 1278,
00:20:38.991      "name": "lvs_0",
00:20:38.991      "total_data_clusters": 1278,
00:20:38.991      "uuid": "ec459490-0e57-4de7-b7a1-903b666d13d5"
00:20:38.991    }
00:20:38.991  ]'
00:20:38.991    06:30:55	-- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="ec459490-0e57-4de7-b7a1-903b666d13d5") .free_clusters'
00:20:38.991   06:30:55	-- common/autotest_common.sh@1358 -- # fc=1278
00:20:38.991    06:30:55	-- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="ec459490-0e57-4de7-b7a1-903b666d13d5") .cluster_size'
00:20:38.991  5112
00:20:38.991   06:30:55	-- common/autotest_common.sh@1359 -- # cs=4194304
00:20:38.991   06:30:55	-- common/autotest_common.sh@1362 -- # free_mb=5112
00:20:38.991   06:30:55	-- common/autotest_common.sh@1363 -- # echo 5112
00:20:38.991   06:30:55	-- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']'
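Editor's note: the free_mb figure used above follows directly from the lvstore report, i.e. free_clusters times cluster_size expressed in MiB:

    echo $(( 1278 * 4194304 / 1024 / 1024 ))   # 5112 MiB available in lvs_0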
00:20:38.991    06:30:55	-- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ec459490-0e57-4de7-b7a1-903b666d13d5 lbd_0 5112
00:20:39.250   06:30:56	-- host/perf.sh@80 -- # lb_guid=70e0cd8a-9c71-4135-9e28-9b0dc715fc93
00:20:39.250    06:30:56	-- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 70e0cd8a-9c71-4135-9e28-9b0dc715fc93 lvs_n_0
00:20:39.817   06:30:56	-- host/perf.sh@83 -- # ls_nested_guid=2869d407-efd2-462d-ae7a-c69e595f364f
00:20:39.817   06:30:56	-- host/perf.sh@84 -- # get_lvs_free_mb 2869d407-efd2-462d-ae7a-c69e595f364f
00:20:39.817   06:30:56	-- common/autotest_common.sh@1353 -- # local lvs_uuid=2869d407-efd2-462d-ae7a-c69e595f364f
00:20:39.817   06:30:56	-- common/autotest_common.sh@1354 -- # local lvs_info
00:20:39.817   06:30:56	-- common/autotest_common.sh@1355 -- # local fc
00:20:39.817   06:30:56	-- common/autotest_common.sh@1356 -- # local cs
00:20:39.817    06:30:56	-- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:20:40.076   06:30:56	-- common/autotest_common.sh@1357 -- # lvs_info='[
00:20:40.076    {
00:20:40.076      "base_bdev": "Nvme0n1",
00:20:40.076      "block_size": 4096,
00:20:40.076      "cluster_size": 4194304,
00:20:40.076      "free_clusters": 0,
00:20:40.076      "name": "lvs_0",
00:20:40.076      "total_data_clusters": 1278,
00:20:40.076      "uuid": "ec459490-0e57-4de7-b7a1-903b666d13d5"
00:20:40.076    },
00:20:40.076    {
00:20:40.076      "base_bdev": "70e0cd8a-9c71-4135-9e28-9b0dc715fc93",
00:20:40.076      "block_size": 4096,
00:20:40.076      "cluster_size": 4194304,
00:20:40.076      "free_clusters": 1276,
00:20:40.076      "name": "lvs_n_0",
00:20:40.076      "total_data_clusters": 1276,
00:20:40.076      "uuid": "2869d407-efd2-462d-ae7a-c69e595f364f"
00:20:40.076    }
00:20:40.076  ]'
00:20:40.076    06:30:56	-- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="2869d407-efd2-462d-ae7a-c69e595f364f") .free_clusters'
00:20:40.076   06:30:56	-- common/autotest_common.sh@1358 -- # fc=1276
00:20:40.076    06:30:56	-- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="2869d407-efd2-462d-ae7a-c69e595f364f") .cluster_size'
00:20:40.076  5104
00:20:40.076   06:30:56	-- common/autotest_common.sh@1359 -- # cs=4194304
00:20:40.076   06:30:56	-- common/autotest_common.sh@1362 -- # free_mb=5104
00:20:40.076   06:30:56	-- common/autotest_common.sh@1363 -- # echo 5104
00:20:40.076   06:30:56	-- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']'
00:20:40.076    06:30:56	-- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2869d407-efd2-462d-ae7a-c69e595f364f lbd_nest_0 5104
00:20:40.335   06:30:57	-- host/perf.sh@88 -- # lb_nested_guid=d6014468-3dce-49dd-aac6-36848c77d2c9
00:20:40.335   06:30:57	-- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:40.594   06:30:57	-- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:20:40.594   06:30:57	-- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d6014468-3dce-49dd-aac6-36848c77d2c9
00:20:40.852   06:30:57	-- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:41.111   06:30:57	-- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:20:41.111   06:30:57	-- host/perf.sh@96 -- # io_size=("512" "131072")
00:20:41.111   06:30:57	-- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:20:41.111   06:30:57	-- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:20:41.111   06:30:57	-- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:41.369  No valid NVMe controllers or AIO or URING devices found
00:20:41.369  Initializing NVMe Controllers
00:20:41.369  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:41.369  WARNING: controller SPDK bdev Controller (SPDK00000000000001  ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:20:41.369  WARNING: Some requested NVMe devices were skipped
00:20:41.369   06:30:58	-- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:20:41.369   06:30:58	-- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:53.613  Initializing NVMe Controllers
00:20:53.613  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:53.613  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:53.613  Initialization complete. Launching workers.
00:20:53.613  ========================================================
00:20:53.613                                                                                                               Latency(us)
00:20:53.613  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:20:53.613  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:     850.60     106.32    1175.35     393.97    8442.80
00:20:53.613  ========================================================
00:20:53.613  Total                                                                    :     850.60     106.32    1175.35     393.97    8442.80
00:20:53.613  
00:20:53.613   06:31:08	-- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:20:53.613   06:31:08	-- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:20:53.613   06:31:08	-- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:53.613  No valid NVMe controllers or AIO or URING devices found
00:20:53.613  Initializing NVMe Controllers
00:20:53.613  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:53.613  WARNING: controller SPDK bdev Controller (SPDK00000000000001  ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:20:53.613  WARNING: Some requested NVMe devices were skipped
00:20:53.613   06:31:08	-- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:20:53.613   06:31:08	-- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:03.590  Initializing NVMe Controllers
00:21:03.590  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:03.590  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:03.590  Initialization complete. Launching workers.
00:21:03.590  ========================================================
00:21:03.590                                                                                                               Latency(us)
00:21:03.590  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:03.590  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:     982.40     122.80   32630.74    7993.49  251928.64
00:21:03.590  ========================================================
00:21:03.590  Total                                                                    :     982.40     122.80   32630.74    7993.49  251928.64
00:21:03.590  
00:21:03.590   06:31:19	-- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:21:03.590   06:31:19	-- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:21:03.590   06:31:19	-- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:03.590  No valid NVMe controllers or AIO or URING devices found
00:21:03.590  Initializing NVMe Controllers
00:21:03.590  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:03.590  WARNING: controller SPDK bdev Controller (SPDK00000000000001  ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:21:03.590  WARNING: Some requested NVMe devices were skipped
00:21:03.590   06:31:19	-- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:21:03.590   06:31:19	-- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:13.574  Initializing NVMe Controllers
00:21:13.574  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:13.574  Controller IO queue size 128, less than required.
00:21:13.574  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.574  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:13.574  Initialization complete. Launching workers.
00:21:13.574  ========================================================
00:21:13.574                                                                                                               Latency(us)
00:21:13.574  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:13.574  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    3893.48     486.69   32927.44   12796.96   64926.57
00:21:13.574  ========================================================
00:21:13.574  Total                                                                    :    3893.48     486.69   32927.44   12796.96   64926.57
00:21:13.574  
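Editor's note: the six spdk_nvme_perf runs above (host/perf.sh lines 97-99) are one sweep over queue depth and I/O size; reconstructed from the xtrace, the loop has this shape, with the target address and options exactly as logged:

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done

The 512-byte cases report "No valid NVMe controllers" because the exported lvol bdev uses 4096-byte blocks, so only the 131072-byte runs produce latency tables.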
00:21:13.574   06:31:29	-- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:13.574   06:31:29	-- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d6014468-3dce-49dd-aac6-36848c77d2c9
00:21:13.574   06:31:30	-- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:21:13.833   06:31:30	-- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 70e0cd8a-9c71-4135-9e28-9b0dc715fc93
00:21:13.833   06:31:30	-- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
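Editor's note: teardown mirrors setup in reverse, as the trace shows: the subsystem is deleted first, then the nested logical volume and its store, then the base volume and store (UUIDs as logged; rpc.py abbreviates the full scripts/rpc.py path):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rpc.py bdev_lvol_delete d6014468-3dce-49dd-aac6-36848c77d2c9   # lbd_nest_0
    rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
    rpc.py bdev_lvol_delete 70e0cd8a-9c71-4135-9e28-9b0dc715fc93   # lbd_0
    rpc.py bdev_lvol_delete_lvstore -l lvs_0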
00:21:14.091   06:31:31	-- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:21:14.091   06:31:31	-- host/perf.sh@114 -- # nvmftestfini
00:21:14.091   06:31:31	-- nvmf/common.sh@476 -- # nvmfcleanup
00:21:14.091   06:31:31	-- nvmf/common.sh@116 -- # sync
00:21:14.350   06:31:31	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:21:14.350   06:31:31	-- nvmf/common.sh@119 -- # set +e
00:21:14.350   06:31:31	-- nvmf/common.sh@120 -- # for i in {1..20}
00:21:14.350   06:31:31	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:21:14.350  rmmod nvme_tcp
00:21:14.350  rmmod nvme_fabrics
00:21:14.350  rmmod nvme_keyring
00:21:14.350   06:31:31	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:21:14.350   06:31:31	-- nvmf/common.sh@123 -- # set -e
00:21:14.350   06:31:31	-- nvmf/common.sh@124 -- # return 0
00:21:14.350   06:31:31	-- nvmf/common.sh@477 -- # '[' -n 82963 ']'
00:21:14.350   06:31:31	-- nvmf/common.sh@478 -- # killprocess 82963
00:21:14.350   06:31:31	-- common/autotest_common.sh@936 -- # '[' -z 82963 ']'
00:21:14.350   06:31:31	-- common/autotest_common.sh@940 -- # kill -0 82963
00:21:14.350    06:31:31	-- common/autotest_common.sh@941 -- # uname
00:21:14.350   06:31:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:14.350    06:31:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82963
00:21:14.350  killing process with pid 82963
00:21:14.350   06:31:31	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:14.350   06:31:31	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:14.350   06:31:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 82963'
00:21:14.350   06:31:31	-- common/autotest_common.sh@955 -- # kill 82963
00:21:14.350   06:31:31	-- common/autotest_common.sh@960 -- # wait 82963
00:21:15.727   06:31:32	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:21:15.727   06:31:32	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:21:15.727   06:31:32	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:21:15.727   06:31:32	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:15.727   06:31:32	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:21:15.727   06:31:32	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:15.727   06:31:32	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:15.727    06:31:32	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:15.727   06:31:32	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:21:15.727  ************************************
00:21:15.727  END TEST nvmf_perf
00:21:15.727  ************************************
00:21:15.727  
00:21:15.727  real	0m50.938s
00:21:15.727  user	3m11.768s
00:21:15.727  sys	0m10.828s
00:21:15.727   06:31:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:15.727   06:31:32	-- common/autotest_common.sh@10 -- # set +x
00:21:15.727   06:31:32	-- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:15.727   06:31:32	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:21:15.727   06:31:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:15.727   06:31:32	-- common/autotest_common.sh@10 -- # set +x
00:21:15.727  ************************************
00:21:15.727  START TEST nvmf_fio_host
00:21:15.727  ************************************
00:21:15.727   06:31:32	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:15.987  * Looking for test storage...
00:21:15.987  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:21:15.987    06:31:32	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:15.987     06:31:32	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:15.987     06:31:32	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:15.987    06:31:32	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:15.987    06:31:32	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:15.987    06:31:32	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:15.987    06:31:32	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:15.987    06:31:32	-- scripts/common.sh@335 -- # IFS=.-:
00:21:15.987    06:31:32	-- scripts/common.sh@335 -- # read -ra ver1
00:21:15.987    06:31:32	-- scripts/common.sh@336 -- # IFS=.-:
00:21:15.987    06:31:32	-- scripts/common.sh@336 -- # read -ra ver2
00:21:15.987    06:31:32	-- scripts/common.sh@337 -- # local 'op=<'
00:21:15.987    06:31:32	-- scripts/common.sh@339 -- # ver1_l=2
00:21:15.987    06:31:32	-- scripts/common.sh@340 -- # ver2_l=1
00:21:15.987    06:31:32	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:15.987    06:31:32	-- scripts/common.sh@343 -- # case "$op" in
00:21:15.987    06:31:32	-- scripts/common.sh@344 -- # : 1
00:21:15.987    06:31:32	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:15.987    06:31:32	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:15.987     06:31:32	-- scripts/common.sh@364 -- # decimal 1
00:21:15.987     06:31:32	-- scripts/common.sh@352 -- # local d=1
00:21:15.987     06:31:32	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:15.987     06:31:32	-- scripts/common.sh@354 -- # echo 1
00:21:15.987    06:31:32	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:15.987     06:31:32	-- scripts/common.sh@365 -- # decimal 2
00:21:15.987     06:31:32	-- scripts/common.sh@352 -- # local d=2
00:21:15.987     06:31:32	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:15.987     06:31:32	-- scripts/common.sh@354 -- # echo 2
00:21:15.987    06:31:32	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:15.987    06:31:32	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:15.987    06:31:32	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:15.987    06:31:32	-- scripts/common.sh@367 -- # return 0
00:21:15.987    06:31:32	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:15.987    06:31:32	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:15.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:15.987  		--rc genhtml_branch_coverage=1
00:21:15.987  		--rc genhtml_function_coverage=1
00:21:15.987  		--rc genhtml_legend=1
00:21:15.987  		--rc geninfo_all_blocks=1
00:21:15.987  		--rc geninfo_unexecuted_blocks=1
00:21:15.987  		
00:21:15.987  		'
00:21:15.987    06:31:32	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:15.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:15.987  		--rc genhtml_branch_coverage=1
00:21:15.987  		--rc genhtml_function_coverage=1
00:21:15.987  		--rc genhtml_legend=1
00:21:15.987  		--rc geninfo_all_blocks=1
00:21:15.987  		--rc geninfo_unexecuted_blocks=1
00:21:15.987  		
00:21:15.987  		'
00:21:15.987    06:31:32	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:15.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:15.987  		--rc genhtml_branch_coverage=1
00:21:15.987  		--rc genhtml_function_coverage=1
00:21:15.987  		--rc genhtml_legend=1
00:21:15.987  		--rc geninfo_all_blocks=1
00:21:15.987  		--rc geninfo_unexecuted_blocks=1
00:21:15.987  		
00:21:15.987  		'
00:21:15.987    06:31:32	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:15.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:15.987  		--rc genhtml_branch_coverage=1
00:21:15.987  		--rc genhtml_function_coverage=1
00:21:15.987  		--rc genhtml_legend=1
00:21:15.987  		--rc geninfo_all_blocks=1
00:21:15.987  		--rc geninfo_unexecuted_blocks=1
00:21:15.987  		
00:21:15.987  		'
00:21:15.987   06:31:32	-- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:15.987    06:31:32	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:15.987    06:31:32	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:15.987    06:31:32	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:15.987     06:31:32	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.987     06:31:32	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.987     06:31:32	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.987     06:31:32	-- paths/export.sh@5 -- # export PATH
00:21:15.987     06:31:32	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.987   06:31:32	-- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:21:15.987     06:31:32	-- nvmf/common.sh@7 -- # uname -s
00:21:15.987    06:31:32	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:15.987    06:31:32	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:15.987    06:31:32	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:15.987    06:31:32	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:15.987    06:31:32	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:15.987    06:31:32	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:15.987    06:31:32	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:15.987    06:31:32	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:15.987    06:31:32	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:15.987     06:31:32	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:15.987    06:31:32	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:21:15.987    06:31:32	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:21:15.987    06:31:32	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:15.987    06:31:32	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:15.987    06:31:32	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:21:15.987    06:31:32	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:15.987     06:31:32	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:15.987     06:31:32	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:15.987     06:31:32	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:15.987      06:31:32	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.987      06:31:32	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.987      06:31:32	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.987      06:31:32	-- paths/export.sh@5 -- # export PATH
00:21:15.987      06:31:32	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.987    06:31:32	-- nvmf/common.sh@46 -- # : 0
00:21:15.987    06:31:32	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:21:15.987    06:31:32	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:21:15.987    06:31:32	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:21:15.987    06:31:32	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:15.987    06:31:32	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:15.987    06:31:32	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:21:15.987    06:31:32	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:21:15.987    06:31:32	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:21:15.987   06:31:32	-- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:15.987   06:31:32	-- host/fio.sh@14 -- # nvmftestinit
00:21:15.987   06:31:32	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:21:15.987   06:31:32	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:15.987   06:31:32	-- nvmf/common.sh@436 -- # prepare_net_devs
00:21:15.987   06:31:32	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:21:15.987   06:31:32	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:21:15.987   06:31:32	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:15.987   06:31:32	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:15.987    06:31:32	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:15.987   06:31:32	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:21:15.987   06:31:32	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:21:15.987   06:31:32	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:21:15.987   06:31:32	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:21:15.987   06:31:32	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:21:15.987   06:31:32	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:21:15.987   06:31:32	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:15.987   06:31:32	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:15.987   06:31:32	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:21:15.987   06:31:32	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:21:15.987   06:31:32	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:21:15.987   06:31:32	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:21:15.987   06:31:32	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:21:15.987   06:31:32	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:15.987   06:31:32	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:21:15.987   06:31:32	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:21:15.987   06:31:32	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:21:15.987   06:31:32	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:21:15.987   06:31:32	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:21:15.987   06:31:32	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:21:15.988  Cannot find device "nvmf_tgt_br"
00:21:15.988   06:31:32	-- nvmf/common.sh@154 -- # true
00:21:15.988   06:31:32	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:21:15.988  Cannot find device "nvmf_tgt_br2"
00:21:15.988   06:31:32	-- nvmf/common.sh@155 -- # true
00:21:15.988   06:31:32	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:21:15.988   06:31:32	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:21:15.988  Cannot find device "nvmf_tgt_br"
00:21:15.988   06:31:32	-- nvmf/common.sh@157 -- # true
00:21:15.988   06:31:32	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:21:16.246  Cannot find device "nvmf_tgt_br2"
00:21:16.246   06:31:32	-- nvmf/common.sh@158 -- # true
00:21:16.246   06:31:32	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:21:16.246   06:31:33	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:21:16.246   06:31:33	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:16.246  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:16.246   06:31:33	-- nvmf/common.sh@161 -- # true
00:21:16.246   06:31:33	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:21:16.246  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:16.246   06:31:33	-- nvmf/common.sh@162 -- # true
00:21:16.246   06:31:33	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:21:16.246   06:31:33	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:21:16.246   06:31:33	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:21:16.246   06:31:33	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:21:16.246   06:31:33	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:21:16.246   06:31:33	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:21:16.246   06:31:33	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:21:16.246   06:31:33	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:21:16.246   06:31:33	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:21:16.246   06:31:33	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:21:16.246   06:31:33	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:21:16.246   06:31:33	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:21:16.246   06:31:33	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:21:16.246   06:31:33	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:21:16.246   06:31:33	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:21:16.246   06:31:33	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:21:16.246   06:31:33	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:21:16.246   06:31:33	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:21:16.246   06:31:33	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:21:16.246   06:31:33	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:21:16.246   06:31:33	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:21:16.246   06:31:33	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:21:16.246   06:31:33	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:21:16.505   06:31:33	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:21:16.505  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:16.505  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms
00:21:16.505  
00:21:16.505  --- 10.0.0.2 ping statistics ---
00:21:16.505  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:16.505  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:21:16.505   06:31:33	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:21:16.505  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:21:16.505  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms
00:21:16.505  
00:21:16.505  --- 10.0.0.3 ping statistics ---
00:21:16.505  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:16.505  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:21:16.505   06:31:33	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:21:16.505  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:16.505  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms
00:21:16.505  
00:21:16.505  --- 10.0.0.1 ping statistics ---
00:21:16.505  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:16.505  rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
00:21:16.505   06:31:33	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:16.505   06:31:33	-- nvmf/common.sh@421 -- # return 0
00:21:16.505   06:31:33	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:21:16.505   06:31:33	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:16.505   06:31:33	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:21:16.505   06:31:33	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:21:16.505   06:31:33	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:16.505   06:31:33	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:21:16.505   06:31:33	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:21:16.505   06:31:33	-- host/fio.sh@16 -- # [[ y != y ]]
00:21:16.505   06:31:33	-- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:21:16.505   06:31:33	-- common/autotest_common.sh@722 -- # xtrace_disable
00:21:16.505   06:31:33	-- common/autotest_common.sh@10 -- # set +x
00:21:16.505   06:31:33	-- host/fio.sh@24 -- # nvmfpid=83934
00:21:16.506   06:31:33	-- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:16.506   06:31:33	-- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:16.506   06:31:33	-- host/fio.sh@28 -- # waitforlisten 83934
00:21:16.506   06:31:33	-- common/autotest_common.sh@829 -- # '[' -z 83934 ']'
00:21:16.506   06:31:33	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:16.506   06:31:33	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:16.506   06:31:33	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:16.506  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:16.506   06:31:33	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:16.506   06:31:33	-- common/autotest_common.sh@10 -- # set +x
00:21:16.506  [2024-12-16 06:31:33.325311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:16.506  [2024-12-16 06:31:33.325396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:16.506  [2024-12-16 06:31:33.468197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:16.765  [2024-12-16 06:31:33.565932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:21:16.765  [2024-12-16 06:31:33.566120] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:16.765  [2024-12-16 06:31:33.566138] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:16.765  [2024-12-16 06:31:33.566150] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:16.765  [2024-12-16 06:31:33.566696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:16.765  [2024-12-16 06:31:33.566795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:16.765  [2024-12-16 06:31:33.566847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:16.765  [2024-12-16 06:31:33.566854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:17.333   06:31:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:17.333   06:31:34	-- common/autotest_common.sh@862 -- # return 0
00:21:17.333   06:31:34	-- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:17.592  [2024-12-16 06:31:34.528215] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:17.592   06:31:34	-- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:21:17.592   06:31:34	-- common/autotest_common.sh@728 -- # xtrace_disable
00:21:17.592   06:31:34	-- common/autotest_common.sh@10 -- # set +x
00:21:17.851   06:31:34	-- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:21:17.851  Malloc1
00:21:18.110   06:31:34	-- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:18.110   06:31:35	-- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:18.368   06:31:35	-- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:18.626  [2024-12-16 06:31:35.469314] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:18.626   06:31:35	-- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
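Annotation: stripped of the xtrace noise, the RPC calls above are the whole recipe host/fio.sh uses to export a RAM-backed namespace over NVMe/TCP before fio connects to it. A sketch with the same arguments as recorded here (transport flags copied verbatim from the log rather than interpreted):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_create_transport -t tcp -o -u 8192                  # transport options as traced above
    $RPC bdev_malloc_create 64 512 -b Malloc1                     # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001    # -a: allow any host
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420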
00:21:18.884   06:31:35	-- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:21:18.884   06:31:35	-- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:18.884   06:31:35	-- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:18.884   06:31:35	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:21:18.884   06:31:35	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:18.884   06:31:35	-- common/autotest_common.sh@1328 -- # local sanitizers
00:21:18.884   06:31:35	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:18.884   06:31:35	-- common/autotest_common.sh@1330 -- # shift
00:21:18.884   06:31:35	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:21:18.884   06:31:35	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:21:18.884    06:31:35	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:18.884    06:31:35	-- common/autotest_common.sh@1334 -- # grep libasan
00:21:18.884    06:31:35	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:21:18.884   06:31:35	-- common/autotest_common.sh@1334 -- # asan_lib=
00:21:18.884   06:31:35	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:21:18.884   06:31:35	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:21:18.884    06:31:35	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:18.884    06:31:35	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:21:18.884    06:31:35	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:21:18.884   06:31:35	-- common/autotest_common.sh@1334 -- # asan_lib=
00:21:18.884   06:31:35	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:21:18.884   06:31:35	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:21:18.885   06:31:35	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:19.143  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:21:19.143  fio-3.35
00:21:19.143  Starting 1 thread
00:21:21.676  
00:21:21.676  test: (groupid=0, jobs=1): err= 0: pid=84060: Mon Dec 16 06:31:38 2024
00:21:21.676    read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(87.0MiB/2005msec)
00:21:21.676      slat (nsec): min=1723, max=339974, avg=2422.57, stdev=3801.72
00:21:21.676      clat (usec): min=3195, max=13651, avg=6154.96, stdev=659.55
00:21:21.676       lat (usec): min=3229, max=13654, avg=6157.39, stdev=659.68
00:21:21.676      clat percentiles (usec):
00:21:21.676       |  1.00th=[ 5014],  5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669],
00:21:21.676       | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194],
00:21:21.676       | 70.00th=[ 6390], 80.00th=[ 6587], 90.00th=[ 6849], 95.00th=[ 7046],
00:21:21.676       | 99.00th=[ 8356], 99.50th=[ 9503], 99.90th=[12125], 99.95th=[13042],
00:21:21.676       | 99.99th=[13698]
00:21:21.676     bw (  KiB/s): min=42608, max=45848, per=99.92%, avg=44396.00, stdev=1599.71, samples=4
00:21:21.676     iops        : min=10652, max=11462, avg=11099.00, stdev=399.93, samples=4
00:21:21.676    write: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(86.7MiB/2005msec); 0 zone resets
00:21:21.676      slat (nsec): min=1800, max=225207, avg=2491.57, stdev=2418.52
00:21:21.676      clat (usec): min=2316, max=9752, avg=5344.56, stdev=518.98
00:21:21.676       lat (usec): min=2328, max=9755, avg=5347.05, stdev=519.03
00:21:21.676      clat percentiles (usec):
00:21:21.676       |  1.00th=[ 4359],  5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 4948],
00:21:21.676       | 30.00th=[ 5080], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5407],
00:21:21.676       | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6063],
00:21:21.676       | 99.00th=[ 6783], 99.50th=[ 8225], 99.90th=[ 9372], 99.95th=[ 9634],
00:21:21.676       | 99.99th=[ 9765]
00:21:21.676     bw (  KiB/s): min=42904, max=45128, per=100.00%, avg=44286.00, stdev=997.52, samples=4
00:21:21.676     iops        : min=10726, max=11282, avg=11071.50, stdev=249.38, samples=4
00:21:21.676    lat (msec)   : 4=0.21%, 10=99.64%, 20=0.16%
00:21:21.676    cpu          : usr=64.52%, sys=24.50%, ctx=51, majf=0, minf=5
00:21:21.676    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:21:21.676       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:21.676       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:21.676       issued rwts: total=22272,22196,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:21.676       latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:21.676  
00:21:21.676  Run status group 0 (all jobs):
00:21:21.676     READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=87.0MiB (91.2MB), run=2005-2005msec
00:21:21.676    WRITE: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=86.7MiB (90.9MB), run=2005-2005msec
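Annotation: the autotest_common.sh trace at 06:31:35 shows what fio_plugin does before every fio run in this file: it ldd's the SPDK ioengine, greps for libasan or libclang_rt.asan, and prepends whichever runtime it finds to LD_PRELOAD so the sanitizer loads ahead of the plugin; here neither is linked, so only the plugin itself ends up preloaded. Roughly (a hedged re-statement, not the exact upstream helper):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    fio_dir=/usr/src/fio
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # Third ldd column is the resolved path of the linked sanitizer runtime, if any.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"            # "$@": job file plus extra args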
00:21:21.676   06:31:38	-- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:21.676   06:31:38	-- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:21.676   06:31:38	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:21:21.676   06:31:38	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:21.676   06:31:38	-- common/autotest_common.sh@1328 -- # local sanitizers
00:21:21.677   06:31:38	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:21.677   06:31:38	-- common/autotest_common.sh@1330 -- # shift
00:21:21.677   06:31:38	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:21:21.677   06:31:38	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:21:21.677    06:31:38	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:21.677    06:31:38	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:21:21.677    06:31:38	-- common/autotest_common.sh@1334 -- # grep libasan
00:21:21.677   06:31:38	-- common/autotest_common.sh@1334 -- # asan_lib=
00:21:21.677   06:31:38	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:21:21.677   06:31:38	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:21:21.677    06:31:38	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:21:21.677    06:31:38	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:21.677    06:31:38	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:21:21.677   06:31:38	-- common/autotest_common.sh@1334 -- # asan_lib=
00:21:21.677   06:31:38	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:21:21.677   06:31:38	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:21:21.677   06:31:38	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:21.677  test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:21:21.677  fio-3.35
00:21:21.677  Starting 1 thread
00:21:24.212  
00:21:24.212  test: (groupid=0, jobs=1): err= 0: pid=84109: Mon Dec 16 06:31:40 2024
00:21:24.212    read: IOPS=9303, BW=145MiB/s (152MB/s)(291MiB/2005msec)
00:21:24.212      slat (usec): min=2, max=135, avg= 3.36, stdev= 2.54
00:21:24.212      clat (usec): min=1981, max=15565, avg=8300.28, stdev=2092.31
00:21:24.212       lat (usec): min=1984, max=15567, avg=8303.64, stdev=2092.40
00:21:24.212      clat percentiles (usec):
00:21:24.212       |  1.00th=[ 4178],  5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6259],
00:21:24.212       | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 8848],
00:21:24.212       | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[10683], 95.00th=[11469],
00:21:24.212       | 99.00th=[13304], 99.50th=[14353], 99.90th=[15270], 99.95th=[15401],
00:21:24.212       | 99.99th=[15533]
00:21:24.212     bw (  KiB/s): min=67456, max=85536, per=48.94%, avg=72856.00, stdev=8521.07, samples=4
00:21:24.212     iops        : min= 4216, max= 5346, avg=4553.50, stdev=532.57, samples=4
00:21:24.212    write: IOPS=5635, BW=88.1MiB/s (92.3MB/s)(149MiB/1695msec); 0 zone resets
00:21:24.212      slat (usec): min=29, max=349, avg=33.34, stdev= 8.61
00:21:24.212      clat (usec): min=3537, max=19742, avg=9886.77, stdev=1852.92
00:21:24.212       lat (usec): min=3568, max=19771, avg=9920.11, stdev=1853.05
00:21:24.212      clat percentiles (usec):
00:21:24.212       |  1.00th=[ 6718],  5.00th=[ 7373], 10.00th=[ 7832], 20.00th=[ 8356],
00:21:24.212       | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028],
00:21:24.212       | 70.00th=[10552], 80.00th=[11207], 90.00th=[12387], 95.00th=[13566],
00:21:24.212       | 99.00th=[15139], 99.50th=[15401], 99.90th=[16450], 99.95th=[19530],
00:21:24.212       | 99.99th=[19792]
00:21:24.212     bw (  KiB/s): min=70624, max=88704, per=84.16%, avg=75888.00, stdev=8582.31, samples=4
00:21:24.212     iops        : min= 4414, max= 5544, avg=4743.00, stdev=536.39, samples=4
00:21:24.212    lat (msec)   : 2=0.01%, 4=0.43%, 10=68.76%, 20=30.81%
00:21:24.212    cpu          : usr=71.27%, sys=18.80%, ctx=35, majf=0, minf=2
00:21:24.212    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:21:24.212       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:24.212       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:24.212       issued rwts: total=18654,9552,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:24.212       latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:24.212  
00:21:24.212  Run status group 0 (all jobs):
00:21:24.212     READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=291MiB (306MB), run=2005-2005msec
00:21:24.212    WRITE: bw=88.1MiB/s (92.3MB/s), 88.1MiB/s-88.1MiB/s (92.3MB/s-92.3MB/s), io=149MiB (156MB), run=1695-1695msec
00:21:24.212   06:31:40	-- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:24.212   06:31:40	-- host/fio.sh@49 -- # '[' 1 -eq 1 ']'
00:21:24.212   06:31:40	-- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs))
00:21:24.212    06:31:40	-- host/fio.sh@51 -- # get_nvme_bdfs
00:21:24.212    06:31:40	-- common/autotest_common.sh@1508 -- # bdfs=()
00:21:24.212    06:31:40	-- common/autotest_common.sh@1508 -- # local bdfs
00:21:24.212    06:31:40	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:21:24.212     06:31:40	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:21:24.212     06:31:40	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:21:24.212    06:31:41	-- common/autotest_common.sh@1510 -- # (( 2 == 0 ))
00:21:24.212    06:31:41	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0
00:21:24.212   06:31:41	-- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2
00:21:24.471  Nvme0n1
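Annotation: since the '[ 1 -eq 1 ]' branch is taken, the test switches from the malloc bdev to a real local NVMe device: get_nvme_bdfs asks gen_nvme.sh for the PCI addresses of the controllers in the VM (0000:00:06.0 and 0000:00:07.0 here) and the first one is attached as Nvme0. A sketch of that discovery-plus-attach step, assuming the same repo layout:

    rootdir=/home/vagrant/spdk_repo/spdk
    # Enumerate NVMe controllers visible to the VM; gen_nvme.sh emits a bdev config as JSON.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    # Attach the first controller; its namespace appears as bdev Nvme0n1.
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "${bdfs[0]}"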
00:21:24.471    06:31:41	-- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
00:21:24.730   06:31:41	-- host/fio.sh@53 -- # ls_guid=336516ef-2eee-45d0-8337-ee8d11e360b8
00:21:24.730   06:31:41	-- host/fio.sh@54 -- # get_lvs_free_mb 336516ef-2eee-45d0-8337-ee8d11e360b8
00:21:24.730   06:31:41	-- common/autotest_common.sh@1353 -- # local lvs_uuid=336516ef-2eee-45d0-8337-ee8d11e360b8
00:21:24.730   06:31:41	-- common/autotest_common.sh@1354 -- # local lvs_info
00:21:24.730   06:31:41	-- common/autotest_common.sh@1355 -- # local fc
00:21:24.730   06:31:41	-- common/autotest_common.sh@1356 -- # local cs
00:21:24.730    06:31:41	-- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:21:24.989   06:31:41	-- common/autotest_common.sh@1357 -- # lvs_info='[
00:21:24.989    {
00:21:24.989      "base_bdev": "Nvme0n1",
00:21:24.989      "block_size": 4096,
00:21:24.989      "cluster_size": 1073741824,
00:21:24.989      "free_clusters": 4,
00:21:24.989      "name": "lvs_0",
00:21:24.989      "total_data_clusters": 4,
00:21:24.989      "uuid": "336516ef-2eee-45d0-8337-ee8d11e360b8"
00:21:24.989    }
00:21:24.989  ]'
00:21:24.989    06:31:41	-- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="336516ef-2eee-45d0-8337-ee8d11e360b8") .free_clusters'
00:21:24.989   06:31:41	-- common/autotest_common.sh@1358 -- # fc=4
00:21:24.989    06:31:41	-- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="336516ef-2eee-45d0-8337-ee8d11e360b8") .cluster_size'
00:21:24.989   06:31:41	-- common/autotest_common.sh@1359 -- # cs=1073741824
00:21:24.989   06:31:41	-- common/autotest_common.sh@1362 -- # free_mb=4096
00:21:24.989  4096
00:21:24.989   06:31:41	-- common/autotest_common.sh@1363 -- # echo 4096
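Annotation: get_lvs_free_mb is just unit conversion on the bdev_lvol_get_lvstores output: free_clusters times cluster_size, expressed in MiB. Here 4 free 1 GiB clusters give the 4096 passed to bdev_lvol_create below, and the same arithmetic later yields 4088 for the nested store (1022 free 4 MiB clusters). As a short sketch:

    fc=4             # free_clusters reported for lvs_0
    cs=1073741824    # cluster_size in bytes (1 GiB)
    free_mb=$(( fc * cs / 1024 / 1024 ))    # 4 * 1024 MiB = 4096
    echo "$free_mb"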
00:21:24.989   06:31:41	-- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096
00:21:25.248  0944ef99-7b19-4e5d-9332-12748d5df328
00:21:25.248   06:31:42	-- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
00:21:25.507   06:31:42	-- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
00:21:25.765   06:31:42	-- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:21:26.038   06:31:42	-- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:26.038   06:31:42	-- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:26.038   06:31:42	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:21:26.038   06:31:42	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:26.038   06:31:42	-- common/autotest_common.sh@1328 -- # local sanitizers
00:21:26.038   06:31:42	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:26.038   06:31:42	-- common/autotest_common.sh@1330 -- # shift
00:21:26.038   06:31:42	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:21:26.038   06:31:42	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:21:26.038    06:31:42	-- common/autotest_common.sh@1334 -- # grep libasan
00:21:26.038    06:31:42	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:26.038    06:31:42	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:21:26.038   06:31:42	-- common/autotest_common.sh@1334 -- # asan_lib=
00:21:26.038   06:31:42	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:21:26.038   06:31:42	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:21:26.038    06:31:42	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:21:26.038    06:31:42	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:26.038    06:31:42	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:21:26.038   06:31:42	-- common/autotest_common.sh@1334 -- # asan_lib=
00:21:26.038   06:31:42	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:21:26.038   06:31:42	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:21:26.038   06:31:42	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:26.308  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:21:26.308  fio-3.35
00:21:26.308  Starting 1 thread
00:21:28.844  
00:21:28.844  test: (groupid=0, jobs=1): err= 0: pid=84260: Mon Dec 16 06:31:45 2024
00:21:28.844    read: IOPS=7349, BW=28.7MiB/s (30.1MB/s)(57.6MiB/2007msec)
00:21:28.844      slat (nsec): min=1754, max=230912, avg=2750.86, stdev=3344.48
00:21:28.844      clat (usec): min=3541, max=16755, avg=9390.78, stdev=996.29
00:21:28.844       lat (usec): min=3546, max=16757, avg=9393.53, stdev=996.17
00:21:28.844      clat percentiles (usec):
00:21:28.844       |  1.00th=[ 7439],  5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8586],
00:21:28.844       | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634],
00:21:28.844       | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076],
00:21:28.844       | 99.00th=[11731], 99.50th=[11994], 99.90th=[14615], 99.95th=[15926],
00:21:28.844       | 99.99th=[16712]
00:21:28.844     bw (  KiB/s): min=28288, max=30224, per=99.83%, avg=29348.00, stdev=805.17, samples=4
00:21:28.844     iops        : min= 7072, max= 7556, avg=7337.00, stdev=201.29, samples=4
00:21:28.844    write: IOPS=7309, BW=28.6MiB/s (29.9MB/s)(57.3MiB/2007msec); 0 zone resets
00:21:28.844      slat (nsec): min=1842, max=185937, avg=2884.24, stdev=2887.51
00:21:28.844      clat (usec): min=1657, max=14358, avg=8020.70, stdev=827.46
00:21:28.844       lat (usec): min=1664, max=14360, avg=8023.59, stdev=827.37
00:21:28.844      clat percentiles (usec):
00:21:28.844       |  1.00th=[ 6259],  5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7373],
00:21:28.844       | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8225],
00:21:28.844       | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9372],
00:21:28.844       | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[11863], 99.95th=[13173],
00:21:28.844       | 99.99th=[14353]
00:21:28.844     bw (  KiB/s): min=28264, max=30688, per=99.99%, avg=29238.00, stdev=1085.93, samples=4
00:21:28.844     iops        : min= 7066, max= 7672, avg=7309.50, stdev=271.48, samples=4
00:21:28.844    lat (msec)   : 2=0.01%, 4=0.07%, 10=86.70%, 20=13.22%
00:21:28.844    cpu          : usr=69.54%, sys=22.53%, ctx=8, majf=0, minf=5
00:21:28.844    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:21:28.844       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:28.844       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:28.844       issued rwts: total=14750,14671,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:28.844       latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:28.844  
00:21:28.844  Run status group 0 (all jobs):
00:21:28.844     READ: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=57.6MiB (60.4MB), run=2007-2007msec
00:21:28.844    WRITE: bw=28.6MiB/s (29.9MB/s), 28.6MiB/s-28.6MiB/s (29.9MB/s-29.9MB/s), io=57.3MiB (60.1MB), run=2007-2007msec
00:21:28.844   06:31:45	-- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:21:28.844    06:31:45	-- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
00:21:28.844   06:31:45	-- host/fio.sh@64 -- # ls_nested_guid=8513c284-dc6b-4e83-a542-bfad27fe494d
00:21:28.844   06:31:45	-- host/fio.sh@65 -- # get_lvs_free_mb 8513c284-dc6b-4e83-a542-bfad27fe494d
00:21:28.844   06:31:45	-- common/autotest_common.sh@1353 -- # local lvs_uuid=8513c284-dc6b-4e83-a542-bfad27fe494d
00:21:28.844   06:31:45	-- common/autotest_common.sh@1354 -- # local lvs_info
00:21:28.844   06:31:45	-- common/autotest_common.sh@1355 -- # local fc
00:21:28.844   06:31:45	-- common/autotest_common.sh@1356 -- # local cs
00:21:28.844    06:31:45	-- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:21:29.412   06:31:46	-- common/autotest_common.sh@1357 -- # lvs_info='[
00:21:29.412    {
00:21:29.412      "base_bdev": "Nvme0n1",
00:21:29.412      "block_size": 4096,
00:21:29.412      "cluster_size": 1073741824,
00:21:29.412      "free_clusters": 0,
00:21:29.412      "name": "lvs_0",
00:21:29.412      "total_data_clusters": 4,
00:21:29.412      "uuid": "336516ef-2eee-45d0-8337-ee8d11e360b8"
00:21:29.412    },
00:21:29.412    {
00:21:29.412      "base_bdev": "0944ef99-7b19-4e5d-9332-12748d5df328",
00:21:29.412      "block_size": 4096,
00:21:29.412      "cluster_size": 4194304,
00:21:29.412      "free_clusters": 1022,
00:21:29.412      "name": "lvs_n_0",
00:21:29.412      "total_data_clusters": 1022,
00:21:29.412      "uuid": "8513c284-dc6b-4e83-a542-bfad27fe494d"
00:21:29.412    }
00:21:29.412  ]'
00:21:29.412    06:31:46	-- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="8513c284-dc6b-4e83-a542-bfad27fe494d") .free_clusters'
00:21:29.412   06:31:46	-- common/autotest_common.sh@1358 -- # fc=1022
00:21:29.412    06:31:46	-- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="8513c284-dc6b-4e83-a542-bfad27fe494d") .cluster_size'
00:21:29.412   06:31:46	-- common/autotest_common.sh@1359 -- # cs=4194304
00:21:29.412   06:31:46	-- common/autotest_common.sh@1362 -- # free_mb=4088
00:21:29.412  4088
00:21:29.412   06:31:46	-- common/autotest_common.sh@1363 -- # echo 4088
00:21:29.412   06:31:46	-- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088
00:21:29.671  12b27f3e-5c7f-4613-9b81-9e7e6c5f4d09
00:21:29.671   06:31:46	-- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
00:21:29.929   06:31:46	-- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
00:21:29.929   06:31:46	-- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:21:30.188   06:31:47	-- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:30.188   06:31:47	-- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:30.188   06:31:47	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:21:30.188   06:31:47	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:30.188   06:31:47	-- common/autotest_common.sh@1328 -- # local sanitizers
00:21:30.188   06:31:47	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:30.188   06:31:47	-- common/autotest_common.sh@1330 -- # shift
00:21:30.188   06:31:47	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:21:30.188   06:31:47	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:21:30.188    06:31:47	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:30.188    06:31:47	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:21:30.188    06:31:47	-- common/autotest_common.sh@1334 -- # grep libasan
00:21:30.188   06:31:47	-- common/autotest_common.sh@1334 -- # asan_lib=
00:21:30.188   06:31:47	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:21:30.188   06:31:47	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:21:30.188    06:31:47	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:21:30.188    06:31:47	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:21:30.188    06:31:47	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:21:30.188   06:31:47	-- common/autotest_common.sh@1334 -- # asan_lib=
00:21:30.188   06:31:47	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:21:30.188   06:31:47	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:21:30.188   06:31:47	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:30.453  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:21:30.453  fio-3.35
00:21:30.453  Starting 1 thread
00:21:33.003  
00:21:33.003  test: (groupid=0, jobs=1): err= 0: pid=84381: Mon Dec 16 06:31:49 2024
00:21:33.003    read: IOPS=6373, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2008msec)
00:21:33.003      slat (nsec): min=1792, max=205728, avg=2955.54, stdev=3753.97
00:21:33.003      clat (usec): min=4402, max=18526, avg=10798.97, stdev=1107.05
00:21:33.003       lat (usec): min=4411, max=18528, avg=10801.93, stdev=1106.98
00:21:33.003      clat percentiles (usec):
00:21:33.003       |  1.00th=[ 8455],  5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896],
00:21:33.003       | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11076],
00:21:33.003       | 70.00th=[11338], 80.00th=[11731], 90.00th=[12256], 95.00th=[12649],
00:21:33.003       | 99.00th=[13435], 99.50th=[13960], 99.90th=[16909], 99.95th=[17433],
00:21:33.003       | 99.99th=[18482]
00:21:33.003     bw (  KiB/s): min=24392, max=25872, per=99.87%, avg=25462.00, stdev=716.74, samples=4
00:21:33.003     iops        : min= 6098, max= 6468, avg=6365.50, stdev=179.19, samples=4
00:21:33.003    write: IOPS=6374, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2008msec); 0 zone resets
00:21:33.003      slat (nsec): min=1895, max=163286, avg=3057.31, stdev=3073.74
00:21:33.003      clat (usec): min=2058, max=16709, avg=9187.98, stdev=911.04
00:21:33.003       lat (usec): min=2070, max=16712, avg=9191.03, stdev=911.04
00:21:33.003      clat percentiles (usec):
00:21:33.003       |  1.00th=[ 7111],  5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455],
00:21:33.003       | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372],
00:21:33.003       | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552],
00:21:33.003       | 99.00th=[11207], 99.50th=[11600], 99.90th=[14615], 99.95th=[15401],
00:21:33.003       | 99.99th=[15926]
00:21:33.003     bw (  KiB/s): min=25184, max=25856, per=99.93%, avg=25478.00, stdev=280.14, samples=4
00:21:33.003     iops        : min= 6296, max= 6464, avg=6369.50, stdev=70.04, samples=4
00:21:33.003    lat (msec)   : 4=0.04%, 10=53.23%, 20=46.73%
00:21:33.003    cpu          : usr=69.66%, sys=22.22%, ctx=531, majf=0, minf=5
00:21:33.003    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:21:33.003       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:33.003       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:33.003       issued rwts: total=12798,12799,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:33.003       latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:33.003  
00:21:33.003  Run status group 0 (all jobs):
00:21:33.003     READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2008-2008msec
00:21:33.003    WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2008-2008msec
00:21:33.003   06:31:49	-- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:21:33.003   06:31:49	-- host/fio.sh@74 -- # sync
00:21:33.003   06:31:49	-- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0
00:21:33.261   06:31:50	-- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:21:33.520   06:31:50	-- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
00:21:33.778   06:31:50	-- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:21:34.037   06:31:50	-- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:21:34.605   06:31:51	-- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:21:34.605   06:31:51	-- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:21:34.605   06:31:51	-- host/fio.sh@86 -- # nvmftestfini
00:21:34.605   06:31:51	-- nvmf/common.sh@476 -- # nvmfcleanup
00:21:34.605   06:31:51	-- nvmf/common.sh@116 -- # sync
00:21:34.605   06:31:51	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:21:34.605   06:31:51	-- nvmf/common.sh@119 -- # set +e
00:21:34.605   06:31:51	-- nvmf/common.sh@120 -- # for i in {1..20}
00:21:34.605   06:31:51	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:21:34.605  rmmod nvme_tcp
00:21:34.605  rmmod nvme_fabrics
00:21:34.605  rmmod nvme_keyring
00:21:34.605   06:31:51	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:21:34.605   06:31:51	-- nvmf/common.sh@123 -- # set -e
00:21:34.605   06:31:51	-- nvmf/common.sh@124 -- # return 0
00:21:34.605   06:31:51	-- nvmf/common.sh@477 -- # '[' -n 83934 ']'
00:21:34.605   06:31:51	-- nvmf/common.sh@478 -- # killprocess 83934
00:21:34.605   06:31:51	-- common/autotest_common.sh@936 -- # '[' -z 83934 ']'
00:21:34.605   06:31:51	-- common/autotest_common.sh@940 -- # kill -0 83934
00:21:34.605    06:31:51	-- common/autotest_common.sh@941 -- # uname
00:21:34.605   06:31:51	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:34.605    06:31:51	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83934
00:21:34.605  killing process with pid 83934
00:21:34.605   06:31:51	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:34.605   06:31:51	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:34.605   06:31:51	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 83934'
00:21:34.605   06:31:51	-- common/autotest_common.sh@955 -- # kill 83934
00:21:34.605   06:31:51	-- common/autotest_common.sh@960 -- # wait 83934
00:21:34.864   06:31:51	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:21:34.864   06:31:51	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:21:34.864   06:31:51	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:21:34.864   06:31:51	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:34.864   06:31:51	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:21:34.864   06:31:51	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:34.864   06:31:51	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:34.864    06:31:51	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:34.864   06:31:51	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:21:34.864  
00:21:34.864  real	0m19.162s
00:21:34.864  user	1m23.462s
00:21:34.864  sys	0m4.511s
00:21:34.864   06:31:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:35.123  ************************************
00:21:35.123  END TEST nvmf_fio_host
00:21:35.123  ************************************
00:21:35.123   06:31:51	-- common/autotest_common.sh@10 -- # set +x
00:21:35.123   06:31:51	-- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp
00:21:35.123   06:31:51	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:21:35.123   06:31:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:35.123   06:31:51	-- common/autotest_common.sh@10 -- # set +x
00:21:35.123  ************************************
00:21:35.123  START TEST nvmf_failover
00:21:35.123  ************************************
00:21:35.123   06:31:51	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp
00:21:35.123  * Looking for test storage...
00:21:35.123  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:21:35.123    06:31:51	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:35.123     06:31:51	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:35.123     06:31:51	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:35.123    06:31:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:35.123    06:31:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:35.123    06:31:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:35.123    06:31:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:35.123    06:31:52	-- scripts/common.sh@335 -- # IFS=.-:
00:21:35.123    06:31:52	-- scripts/common.sh@335 -- # read -ra ver1
00:21:35.123    06:31:52	-- scripts/common.sh@336 -- # IFS=.-:
00:21:35.123    06:31:52	-- scripts/common.sh@336 -- # read -ra ver2
00:21:35.123    06:31:52	-- scripts/common.sh@337 -- # local 'op=<'
00:21:35.123    06:31:52	-- scripts/common.sh@339 -- # ver1_l=2
00:21:35.123    06:31:52	-- scripts/common.sh@340 -- # ver2_l=1
00:21:35.123    06:31:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:35.123    06:31:52	-- scripts/common.sh@343 -- # case "$op" in
00:21:35.123    06:31:52	-- scripts/common.sh@344 -- # : 1
00:21:35.123    06:31:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:35.123    06:31:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:35.123     06:31:52	-- scripts/common.sh@364 -- # decimal 1
00:21:35.123     06:31:52	-- scripts/common.sh@352 -- # local d=1
00:21:35.123     06:31:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:35.123     06:31:52	-- scripts/common.sh@354 -- # echo 1
00:21:35.123    06:31:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:35.123     06:31:52	-- scripts/common.sh@365 -- # decimal 2
00:21:35.123     06:31:52	-- scripts/common.sh@352 -- # local d=2
00:21:35.123     06:31:52	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:35.123     06:31:52	-- scripts/common.sh@354 -- # echo 2
00:21:35.123    06:31:52	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:35.123    06:31:52	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:35.123    06:31:52	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:35.123    06:31:52	-- scripts/common.sh@367 -- # return 0
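Annotation: the scripts/common.sh trace above is the stock version-compare helper ('lt 1.15 2') deciding that the installed lcov is older than 2.0, which selects the long-form '--rc lcov_branch_coverage=1 ...' spelling exported just below. Its field-by-field comparison works roughly like this sketch (simplified; the real helper also normalizes non-numeric fields through its 'decimal' function):

    cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<'* ]]; return; }
        done
        [[ $op == *'='* ]]               # all fields equal: true only if the operator allows equality
    }
    cmp_versions 1.15 '<' 2 && echo "1.15 is older than 2"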
00:21:35.123    06:31:52	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:35.123    06:31:52	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:35.123  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:35.123  		--rc genhtml_branch_coverage=1
00:21:35.123  		--rc genhtml_function_coverage=1
00:21:35.123  		--rc genhtml_legend=1
00:21:35.123  		--rc geninfo_all_blocks=1
00:21:35.123  		--rc geninfo_unexecuted_blocks=1
00:21:35.123  		
00:21:35.123  		'
00:21:35.123    06:31:52	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:35.123  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:35.123  		--rc genhtml_branch_coverage=1
00:21:35.123  		--rc genhtml_function_coverage=1
00:21:35.123  		--rc genhtml_legend=1
00:21:35.123  		--rc geninfo_all_blocks=1
00:21:35.123  		--rc geninfo_unexecuted_blocks=1
00:21:35.123  		
00:21:35.123  		'
00:21:35.123    06:31:52	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:35.123  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:35.123  		--rc genhtml_branch_coverage=1
00:21:35.123  		--rc genhtml_function_coverage=1
00:21:35.123  		--rc genhtml_legend=1
00:21:35.123  		--rc geninfo_all_blocks=1
00:21:35.123  		--rc geninfo_unexecuted_blocks=1
00:21:35.123  		
00:21:35.123  		'
00:21:35.123    06:31:52	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:35.123  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:35.123  		--rc genhtml_branch_coverage=1
00:21:35.123  		--rc genhtml_function_coverage=1
00:21:35.123  		--rc genhtml_legend=1
00:21:35.123  		--rc geninfo_all_blocks=1
00:21:35.123  		--rc geninfo_unexecuted_blocks=1
00:21:35.123  		
00:21:35.123  		'
00:21:35.123   06:31:52	-- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:21:35.123     06:31:52	-- nvmf/common.sh@7 -- # uname -s
00:21:35.123    06:31:52	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:35.123    06:31:52	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:35.123    06:31:52	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:35.123    06:31:52	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:35.123    06:31:52	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:35.123    06:31:52	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:35.123    06:31:52	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:35.123    06:31:52	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:35.123    06:31:52	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:35.123     06:31:52	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:35.123    06:31:52	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:21:35.123    06:31:52	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:21:35.123    06:31:52	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:35.123    06:31:52	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:35.123    06:31:52	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:21:35.123    06:31:52	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:35.123     06:31:52	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:35.124     06:31:52	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:35.124     06:31:52	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:35.124      06:31:52	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:35.124      06:31:52	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:35.124      06:31:52	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:35.124      06:31:52	-- paths/export.sh@5 -- # export PATH
00:21:35.124      06:31:52	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:35.124    06:31:52	-- nvmf/common.sh@46 -- # : 0
00:21:35.124    06:31:52	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:21:35.124    06:31:52	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:21:35.124    06:31:52	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:21:35.124    06:31:52	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:35.124    06:31:52	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:35.124    06:31:52	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:21:35.124    06:31:52	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:21:35.124    06:31:52	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:21:35.124   06:31:52	-- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:35.124   06:31:52	-- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:35.124   06:31:52	-- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:35.124   06:31:52	-- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:35.124   06:31:52	-- host/failover.sh@18 -- # nvmftestinit
00:21:35.124   06:31:52	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:21:35.124   06:31:52	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:35.124   06:31:52	-- nvmf/common.sh@436 -- # prepare_net_devs
00:21:35.124   06:31:52	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:21:35.124   06:31:52	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:21:35.124   06:31:52	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:35.124   06:31:52	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:35.124    06:31:52	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:35.383   06:31:52	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:21:35.383   06:31:52	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:21:35.383   06:31:52	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:21:35.383   06:31:52	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:21:35.383   06:31:52	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:21:35.383   06:31:52	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:21:35.383   06:31:52	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:35.383   06:31:52	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:35.383   06:31:52	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:21:35.383   06:31:52	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:21:35.383   06:31:52	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:21:35.383   06:31:52	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:21:35.383   06:31:52	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:21:35.383   06:31:52	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:35.383   06:31:52	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:21:35.383   06:31:52	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:21:35.383   06:31:52	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:21:35.383   06:31:52	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:21:35.383   06:31:52	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:21:35.383   06:31:52	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:21:35.383  Cannot find device "nvmf_tgt_br"
00:21:35.383   06:31:52	-- nvmf/common.sh@154 -- # true
00:21:35.383   06:31:52	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:21:35.383  Cannot find device "nvmf_tgt_br2"
00:21:35.383   06:31:52	-- nvmf/common.sh@155 -- # true
00:21:35.383   06:31:52	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:21:35.383   06:31:52	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:21:35.383  Cannot find device "nvmf_tgt_br"
00:21:35.383   06:31:52	-- nvmf/common.sh@157 -- # true
00:21:35.383   06:31:52	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:21:35.383  Cannot find device "nvmf_tgt_br2"
00:21:35.383   06:31:52	-- nvmf/common.sh@158 -- # true
00:21:35.383   06:31:52	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:21:35.383   06:31:52	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:21:35.383   06:31:52	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:35.383  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:35.383   06:31:52	-- nvmf/common.sh@161 -- # true
00:21:35.383   06:31:52	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:21:35.383  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:35.383   06:31:52	-- nvmf/common.sh@162 -- # true
00:21:35.383   06:31:52	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:21:35.383   06:31:52	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:21:35.383   06:31:52	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:21:35.383   06:31:52	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:21:35.383   06:31:52	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:21:35.383   06:31:52	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:21:35.383   06:31:52	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:21:35.383   06:31:52	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:21:35.383   06:31:52	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:21:35.383   06:31:52	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:21:35.383   06:31:52	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:21:35.383   06:31:52	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:21:35.383   06:31:52	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:21:35.383   06:31:52	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:21:35.383   06:31:52	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:21:35.383   06:31:52	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:21:35.383   06:31:52	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:21:35.383   06:31:52	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:21:35.383   06:31:52	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:21:35.641   06:31:52	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:21:35.641   06:31:52	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:21:35.641   06:31:52	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:21:35.641   06:31:52	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:21:35.641   06:31:52	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:21:35.641  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:35.641  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms
00:21:35.641  
00:21:35.641  --- 10.0.0.2 ping statistics ---
00:21:35.641  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:35.641  rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:21:35.641   06:31:52	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:21:35.641  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:21:35.641  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms
00:21:35.641  
00:21:35.641  --- 10.0.0.3 ping statistics ---
00:21:35.641  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:35.641  rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:21:35.641   06:31:52	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:21:35.641  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:35.641  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:21:35.641  
00:21:35.641  --- 10.0.0.1 ping statistics ---
00:21:35.641  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:35.641  rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:21:35.641   06:31:52	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:35.641   06:31:52	-- nvmf/common.sh@421 -- # return 0
00:21:35.641   06:31:52	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:21:35.642   06:31:52	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:35.642   06:31:52	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:21:35.642   06:31:52	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:21:35.642   06:31:52	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:35.642   06:31:52	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:21:35.642   06:31:52	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:21:35.642   06:31:52	-- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:21:35.642   06:31:52	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:21:35.642   06:31:52	-- common/autotest_common.sh@722 -- # xtrace_disable
00:21:35.642   06:31:52	-- common/autotest_common.sh@10 -- # set +x
00:21:35.642   06:31:52	-- nvmf/common.sh@469 -- # nvmfpid=84660
00:21:35.642   06:31:52	-- nvmf/common.sh@470 -- # waitforlisten 84660
00:21:35.642   06:31:52	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:35.642   06:31:52	-- common/autotest_common.sh@829 -- # '[' -z 84660 ']'
00:21:35.642   06:31:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:35.642   06:31:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:35.642   06:31:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:35.642  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:35.642   06:31:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:35.642   06:31:52	-- common/autotest_common.sh@10 -- # set +x
00:21:35.642  [2024-12-16 06:31:52.507358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:35.642  [2024-12-16 06:31:52.507445] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:35.901  [2024-12-16 06:31:52.650752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:35.901  [2024-12-16 06:31:52.745887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:21:35.901  [2024-12-16 06:31:52.746064] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:35.901  [2024-12-16 06:31:52.746081] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:35.901  [2024-12-16 06:31:52.746092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:35.901  [2024-12-16 06:31:52.746267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:35.901  [2024-12-16 06:31:52.746884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:35.901  [2024-12-16 06:31:52.747038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:36.468   06:31:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:36.468   06:31:53	-- common/autotest_common.sh@862 -- # return 0
00:21:36.468   06:31:53	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:21:36.468   06:31:53	-- common/autotest_common.sh@728 -- # xtrace_disable
00:21:36.468   06:31:53	-- common/autotest_common.sh@10 -- # set +x
00:21:36.727   06:31:53	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:36.727   06:31:53	-- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:36.985  [2024-12-16 06:31:53.765760] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
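With the target answering on its RPC socket, failover.sh@22 creates the TCP transport using the options assembled by nvmf/common.sh plus -u 8192. A hedged sketch of that step, assuming nvmf_get_transports is available to confirm the result:

  # Create the TCP transport as the test does, then list transports to verify.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_get_transports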
00:21:36.985   06:31:53	-- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:37.244  Malloc0
00:21:37.245   06:31:54	-- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:37.503   06:31:54	-- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:37.762   06:31:54	-- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:38.021  [2024-12-16 06:31:54.776222] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:38.021   06:31:54	-- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:38.021  [2024-12-16 06:31:54.976480] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:38.021   06:31:54	-- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:38.286  [2024-12-16 06:31:55.172785] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
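The target side is now fully provisioned: one Malloc-backed namespace under nqn.2016-06.io.spdk:cnode1 and TCP listeners on ports 4420, 4421 and 4422, so individual paths can be dropped and restored during the run. A compact recap of the RPCs driven above (a sketch, not a replacement for the script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done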
00:21:38.286   06:31:55	-- host/failover.sh@31 -- # bdevperf_pid=84773
00:21:38.286   06:31:55	-- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:38.286   06:31:55	-- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:38.286   06:31:55	-- host/failover.sh@34 -- # waitforlisten 84773 /var/tmp/bdevperf.sock
00:21:38.286   06:31:55	-- common/autotest_common.sh@829 -- # '[' -z 84773 ']'
00:21:38.286   06:31:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:38.286   06:31:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:38.286  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:38.286   06:31:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:38.286   06:31:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:38.286   06:31:55	-- common/autotest_common.sh@10 -- # set +x
00:21:39.668   06:31:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:39.668   06:31:56	-- common/autotest_common.sh@862 -- # return 0
00:21:39.668   06:31:56	-- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:39.668  NVMe0n1
00:21:39.669   06:31:56	-- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:39.927  
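bdevperf was started with -z, so it idles until configured over /var/tmp/bdevperf.sock. Both attach calls above use the same controller name (-b NVMe0) and NQN; the first creates the NVMe0n1 bdev over port 4420 and the second, which prints nothing, should register 4421 as an alternate path for failover rather than a new bdev. A hedged recap:

  # Attach the same subsystem twice under one controller name: path 4420 first,
  # then 4421 as the failover path.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1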
00:21:39.927   06:31:56	-- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:39.927   06:31:56	-- host/failover.sh@39 -- # run_test_pid=84815
00:21:39.927   06:31:56	-- host/failover.sh@41 -- # sleep 1
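The verify workload itself is started out of band: bdevperf.py perform_tests kicks off the 15-second run defined by the bdevperf command line (-q 128 -o 4096 -w verify -t 15), and the script keeps its pid so it can wait on it after the listener churn. A sketch of that pattern, under the assumption that the helper is backgrounded exactly as the recorded run_test_pid suggests:

  # Launch the 15s verify run in the background and remember its pid.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  sleep 1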
00:21:40.863   06:31:57	-- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:41.123  [2024-12-16 06:31:58.020624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5b0 is same with the state(5) to be set
00:21:41.124   06:31:58	-- host/failover.sh@45 -- # sleep 3
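Here the first failover is forced: the 4420 listener is removed while I/O is in flight, the target logs the tcp.c recv-state errors above as it tears the qpair down, and bdev_nvme inside bdevperf is expected to move I/O to the 4421 path during the 3-second settle. A hedged recap of the step:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3    # allow bdevperf's bdev_nvme layer to detect the dead path and switch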
00:21:44.411   06:32:01	-- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:44.411  
00:21:44.411   06:32:01	-- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:44.669  [2024-12-16 06:32:01.573189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a0420 is same with the state(5) to be set
00:21:44.670   06:32:01	-- host/failover.sh@50 -- # sleep 3
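The second hop repeats the pattern one port further along: a third path on 4422 is attached on the bdevperf side, the 4421 listener is removed, and another 3-second sleep gives the initiator time to move over. Recapped as a sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3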
00:21:47.958   06:32:04	-- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:47.958  [2024-12-16 06:32:04.837290] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:47.958   06:32:04	-- host/failover.sh@55 -- # sleep 1
00:21:48.895   06:32:05	-- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:49.154  [2024-12-16 06:32:06.110026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a0fb0 is same with the state(5) to be set
00:21:49.413   06:32:06	-- host/failover.sh@59 -- # wait 84815
00:21:56.052  0
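With the path shuffling done, failover.sh@59 waits on the background perform_tests pid; reading the bare 0 above as the workload finishing cleanly is an assumption, the authoritative record is the try.txt dump further down. The wait itself is just:

  # Block until the background verify run exits and keep its status.
  wait "$run_test_pid"
  status=$?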
00:21:56.052   06:32:11	-- host/failover.sh@61 -- # killprocess 84773
00:21:56.052   06:32:11	-- common/autotest_common.sh@936 -- # '[' -z 84773 ']'
00:21:56.052   06:32:11	-- common/autotest_common.sh@940 -- # kill -0 84773
00:21:56.052    06:32:11	-- common/autotest_common.sh@941 -- # uname
00:21:56.052   06:32:11	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:56.052    06:32:11	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84773
00:21:56.052  killing process with pid 84773
00:21:56.052   06:32:11	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:56.052   06:32:11	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:56.052   06:32:11	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 84773'
00:21:56.052   06:32:11	-- common/autotest_common.sh@955 -- # kill 84773
00:21:56.052   06:32:11	-- common/autotest_common.sh@960 -- # wait 84773
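bdevperf has now been killed, and failover.sh@63 next dumps try.txt, the captured bdevperf output, below; the ABORTED - SQ DELETION completions in it line up with the moments a listener was pulled. A hedged one-liner for skimming that dump:

  # Count how many commands were aborted during the path flips.
  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt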
00:21:56.052   06:32:12	-- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:56.052  [2024-12-16 06:31:55.246483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:56.052  [2024-12-16 06:31:55.246611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84773 ]
00:21:56.052  [2024-12-16 06:31:55.380664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:56.052  [2024-12-16 06:31:55.461437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:56.052  Running I/O for 15 seconds...
00:21:56.052  [2024-12-16 06:31:58.021424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.052  [2024-12-16 06:31:58.021474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.052  [2024-12-16 06:31:58.021536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.052  [2024-12-16 06:31:58.021564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.052  [2024-12-16 06:31:58.021612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5440 is same with the state(5) to be set
00:21:56.052  [2024-12-16 06:31:58.021701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.021723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.021761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.021790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.021818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.021878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.021920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.021967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.021982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.021995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.052  [2024-12-16 06:31:58.022927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.052  [2024-12-16 06:31:58.022939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.022953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.022965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.022979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.022991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.023814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.023977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.023990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.024144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.024169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.024245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.053  [2024-12-16 06:31:58.024303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.053  [2024-12-16 06:31:58.024355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.053  [2024-12-16 06:31:58.024368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.024594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.024620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.024647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.024680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.024708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.024760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.024788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.024912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.024976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.024989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.025097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.025150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.025175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.025226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.025256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.025313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.054  [2024-12-16 06:31:58.025370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.054  [2024-12-16 06:31:58.025535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91a9a0 is same with the state(5) to be set
00:21:56.054  [2024-12-16 06:31:58.025562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:56.054  [2024-12-16 06:31:58.025572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:56.054  [2024-12-16 06:31:58.025581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:8 PRP1 0x0 PRP2 0x0
00:21:56.054  [2024-12-16 06:31:58.025593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:31:58.025646] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x91a9a0 was disconnected and freed. reset controller.
00:21:56.054  [2024-12-16 06:31:58.025680] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:56.054  [2024-12-16 06:31:58.025694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:56.054  [2024-12-16 06:31:58.028131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:56.054  [2024-12-16 06:31:58.028166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a5440 (9): Bad file descriptor
00:21:56.054  [2024-12-16 06:31:58.063839] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
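The long runs of completions above all carry status "(00/08)": status code type 0x0 (generic command status) with status code 0x08, i.e. the command was aborted because its submission queue was deleted, which is expected here while the initiator tears down qid 1 and fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before the controller reset completes. As a minimal sketch (plain C, not SPDK code; every name below is a hypothetical helper, not an SPDK identifier), the sct/sc/p/m/dnr fields printed in these lines can be pulled out of completion dword 3 as laid out in the NVMe base spec (phase tag in bit 16, status field in bits 31:17):

    #include <stdint.h>
    #include <stdio.h>

    struct cpl_status {
        uint8_t p;    /* phase tag */
        uint8_t sc;   /* status code */
        uint8_t sct;  /* status code type */
        uint8_t m;    /* more */
        uint8_t dnr;  /* do not retry */
    };

    /* Decode completion dword 3: P = bit 16, SC = bits 24:17,
     * SCT = bits 27:25, M = bit 30, DNR = bit 31. */
    static struct cpl_status decode_cdw3(uint32_t cdw3)
    {
        struct cpl_status s;

        s.p   = (cdw3 >> 16) & 0x1;
        s.sc  = (cdw3 >> 17) & 0xff;
        s.sct = (cdw3 >> 25) & 0x7;
        s.m   = (cdw3 >> 30) & 0x1;
        s.dnr = (cdw3 >> 31) & 0x1;
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 (generic) / SC 0x08 (command aborted due to SQ deletion),
         * matching the "(00/08)" completions in the log above. */
        uint32_t cdw3 = (0x08u << 17) | (0x0u << 25);
        struct cpl_status s = decode_cdw3(cdw3);

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

Compiled and run, this prints "(00/08) p:0 m:0 dnr:0", the same status tuple that spdk_nvme_print_completion reports for each aborted request above.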
00:21:56.054  [2024-12-16 06:32:01.573347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.054  [2024-12-16 06:32:01.573395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:32:01.573412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.054  [2024-12-16 06:32:01.573442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.054  [2024-12-16 06:32:01.573456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.054  [2024-12-16 06:32:01.573467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.573480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.055  [2024-12-16 06:32:01.573491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.573520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5440 is same with the state(5) to be set
00:21:56.055  [2024-12-16 06:32:01.573738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.573762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.573794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.573809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.573823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.573852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.573901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.573913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.573927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.573939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.573952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.573964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.573977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.573989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.574361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.574387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.574523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.574778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.574848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.574909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.574934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.574959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.055  [2024-12-16 06:32:01.574984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.574998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.575010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.575023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.055  [2024-12-16 06:32:01.575034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.055  [2024-12-16 06:32:01.575047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.575059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.575157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.575213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.575462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.575487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.575645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.575700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.575728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.575975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.575987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.576086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.576111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.576136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.576186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.056  [2024-12-16 06:32:01.576291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.056  [2024-12-16 06:32:01.576404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.056  [2024-12-16 06:32:01.576415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.576440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.576489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.576816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.576882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.576955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.576985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.576999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.577040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.577065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.577120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.057  [2024-12-16 06:32:01.577219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:01.577402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89dae0 is same with the state(5) to be set
00:21:56.057  [2024-12-16 06:32:01.577428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:56.057  [2024-12-16 06:32:01.577438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:56.057  [2024-12-16 06:32:01.577452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43712 len:8 PRP1 0x0 PRP2 0x0
00:21:56.057  [2024-12-16 06:32:01.577464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:01.577526] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x89dae0 was disconnected and freed. reset controller.
00:21:56.057  [2024-12-16 06:32:01.577550] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:56.057  [2024-12-16 06:32:01.577563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:56.057  [2024-12-16 06:32:01.580022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:56.057  [2024-12-16 06:32:01.580059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a5440 (9): Bad file descriptor
00:21:56.057  [2024-12-16 06:32:01.604953] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
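At this point one failover cycle has completed: the queued I/O on the dropped path was aborted with SQ DELETION, qpair 0x89dae0 was disconnected and freed, bdev_nvme started a failover from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset finished successfully. The log itself does not show how the paths were configured, so the following is only a minimal sketch, assuming the standard SPDK scripts/rpc.py helpers, of a target/host setup that could produce this kind of multipath failover. The addresses, ports, transport type and subsystem NQN are taken from the log; the malloc bdev, its size, the bdev name NVMe0 and the serial number are illustrative and not taken from this run.

    # Hypothetical reconstruction of the setup exercised here (hedged sketch).
    RPC=scripts/rpc.py   # assumed path to the SPDK RPC helper

    # Target side: one TCP subsystem listening on the three ports seen in the
    # failover messages (4420, 4421, 4422), backed by an illustrative malloc bdev.
    $RPC nvmf_create_transport -t tcp
    $RPC bdev_malloc_create -b Malloc0 64 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

    # Host side: attach the subsystem through one path. When that path drops,
    # bdev_nvme aborts the queued I/O (the SQ DELETION notices above) and fails
    # over to the next registered trid before resetting the controller, which is
    # exactly the sequence logged around 06:32:01.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

How the alternate trids are registered on the host (additional attach calls versus multipath options) is not visible in this excerpt; the sketch only shows the single-path attach that matches the port the failover starts from.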
00:21:56.057  [2024-12-16 06:32:06.110499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:06.110591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:06.110620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:06.110637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:06.110654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:06.110668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:06.110683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:06.110698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:06.110731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:06.110747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:06.110762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:06.110790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:06.110805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:06.110818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:06.110847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.057  [2024-12-16 06:32:06.110860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.057  [2024-12-16 06:32:06.110874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.110894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.110908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.110923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.110937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.110950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.110979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.110991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.111629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.111738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.111818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.111947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.111974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.111988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.112042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.112067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.112192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.112248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.058  [2024-12-16 06:32:06.112274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.058  [2024-12-16 06:32:06.112338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.058  [2024-12-16 06:32:06.112350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.112728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.112753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.112777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.112828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.112879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.112963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.112976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.112988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.059  [2024-12-16 06:32:06.113481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.059  [2024-12-16 06:32:06.113520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.059  [2024-12-16 06:32:06.113533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:56.060  [2024-12-16 06:32:06.113546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.113983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.113995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.114020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.114045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.114070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.114095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.114120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.060  [2024-12-16 06:32:06.114145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917cd0 is same with the state(5) to be set
00:21:56.060  [2024-12-16 06:32:06.114172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:56.060  [2024-12-16 06:32:06.114181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:56.060  [2024-12-16 06:32:06.114191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66560 len:8 PRP1 0x0 PRP2 0x0
00:21:56.060  [2024-12-16 06:32:06.114202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114256] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x917cd0 was disconnected and freed. reset controller.
00:21:56.060  [2024-12-16 06:32:06.114274] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:56.060  [2024-12-16 06:32:06.114333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.060  [2024-12-16 06:32:06.114352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.060  [2024-12-16 06:32:06.114377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.060  [2024-12-16 06:32:06.114400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:56.060  [2024-12-16 06:32:06.114465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.060  [2024-12-16 06:32:06.114478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:56.060  [2024-12-16 06:32:06.114533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a5440 (9): Bad file descriptor
00:21:56.060  [2024-12-16 06:32:06.116704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:56.060  [2024-12-16 06:32:06.144515] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
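The burst of ABORTED - SQ DELETION notices above is the host-side signature of a forced failover: the TCP qpair to 10.0.0.2:4422 is disconnected, every command still queued on it is completed manually with SQ DELETION status, and bdev_nvme then fails over to the next registered path (10.0.0.2:4420) and resets the controller. The aborted reads are most likely what the non-zero Fail/s column in the 15-second summary below is counting. A rough, illustrative way to gauge the size of the abort burst from the captured log (try.txt is the file the script cats further down):

  # illustrative only: count aborts and successful resets in the captured host log
  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt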
00:21:56.060  
00:21:56.060                                                                                                  Latency(us)
00:21:56.060  
[2024-12-16T06:32:13.036Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:56.060  
[2024-12-16T06:32:13.036Z]  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:56.060  	 Verification LBA range: start 0x0 length 0x4000
00:21:56.060  	 NVMe0n1             :      15.01   15061.52      58.83     314.33     0.00    8309.74     718.66   14894.55
00:21:56.060  
[2024-12-16T06:32:13.036Z]  ===================================================================================================================
00:21:56.060  
[2024-12-16T06:32:13.036Z]  Total                       :              15061.52      58.83     314.33     0.00    8309.74     718.66   14894.55
00:21:56.060  Received shutdown signal, test time was about 15.000000 seconds
00:21:56.060  
00:21:56.060                                                                                                  Latency(us)
00:21:56.060  
[2024-12-16T06:32:13.036Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:56.060  
[2024-12-16T06:32:13.036Z]  ===================================================================================================================
00:21:56.060  
[2024-12-16T06:32:13.036Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:21:56.060    06:32:12	-- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:56.060   06:32:12	-- host/failover.sh@65 -- # count=3
00:21:56.060   06:32:12	-- host/failover.sh@67 -- # (( count != 3 ))
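host/failover.sh@65-67 then validates the 15-second run: it greps the bdevperf output for "Resetting controller successful" and requires exactly three hits, one per forced failover, before moving on to the explicit multipath phase; with count=3 the (( count != 3 )) guard above falls through. A minimal sketch of the same check, where $out is an assumed name for the file holding the bdevperf output and the grep and test are as traced:

  # sketch of the check at host/failover.sh@65-67 ($out is an assumed variable name)
  count=$(grep -c 'Resetting controller successful' "$out")
  if (( count != 3 )); then
      exit 1
  fi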
00:21:56.060   06:32:12	-- host/failover.sh@73 -- # bdevperf_pid=85026
00:21:56.060   06:32:12	-- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:56.060   06:32:12	-- host/failover.sh@75 -- # waitforlisten 85026 /var/tmp/bdevperf.sock
00:21:56.060   06:32:12	-- common/autotest_common.sh@829 -- # '[' -z 85026 ']'
00:21:56.060   06:32:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:56.060   06:32:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:56.060   06:32:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:56.060  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:56.060   06:32:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:56.060   06:32:12	-- common/autotest_common.sh@10 -- # set +x
00:21:56.321   06:32:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:56.321   06:32:13	-- common/autotest_common.sh@862 -- # return 0
00:21:56.321   06:32:13	-- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:56.580  [2024-12-16 06:32:13.452274] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:56.580   06:32:13	-- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:56.840  [2024-12-16 06:32:13.660398] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:56.840   06:32:13	-- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:57.099  NVMe0n1
00:21:57.099   06:32:13	-- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:57.358  
00:21:57.358   06:32:14	-- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:57.618  
00:21:57.618   06:32:14	-- host/failover.sh@82 -- # grep -q NVMe0
00:21:57.618   06:32:14	-- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:57.877   06:32:14	-- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:58.136   06:32:15	-- host/failover.sh@87 -- # sleep 3
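The second phase builds an explicit three-path configuration: listeners are added on 4421 and 4422, the same controller name NVMe0 is attached once per portal (4420, 4421, 4422) through the bdevperf RPC socket, and the 4420 path is then detached, which forces the failover to 4421 seen in the try.txt dump below. A condensed sketch of the traced rpc.py sequence (arguments exactly as traced; the loop is added only to shorten the listing):

  # condensed from the rpc.py calls traced above
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1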
00:22:01.425   06:32:18	-- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:01.425   06:32:18	-- host/failover.sh@88 -- # grep -q NVMe0
00:22:01.425   06:32:18	-- host/failover.sh@90 -- # run_test_pid=85169
00:22:01.425   06:32:18	-- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:01.425   06:32:18	-- host/failover.sh@92 -- # wait 85169
00:22:02.804  0
00:22:02.804   06:32:19	-- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:02.804  [2024-12-16 06:32:12.245564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:02.804  [2024-12-16 06:32:12.245643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85026 ]
00:22:02.804  [2024-12-16 06:32:12.373676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:02.804  [2024-12-16 06:32:12.461011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:02.804  [2024-12-16 06:32:15.058253] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:02.804  [2024-12-16 06:32:15.058361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:02.804  [2024-12-16 06:32:15.058383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:02.804  [2024-12-16 06:32:15.058399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:02.804  [2024-12-16 06:32:15.058411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:02.804  [2024-12-16 06:32:15.058431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:02.804  [2024-12-16 06:32:15.058474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:02.804  [2024-12-16 06:32:15.058488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:02.804  [2024-12-16 06:32:15.058500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:02.804  [2024-12-16 06:32:15.058525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:02.804  [2024-12-16 06:32:15.058569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:02.804  [2024-12-16 06:32:15.058598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496440 (9): Bad file descriptor
00:22:02.804  [2024-12-16 06:32:15.065588] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:02.804  Running I/O for 1 seconds...
00:22:02.804  
00:22:02.804                                                                                                  Latency(us)
00:22:02.804  
[2024-12-16T06:32:19.780Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:02.804  
[2024-12-16T06:32:19.780Z]  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:02.804  	 Verification LBA range: start 0x0 length 0x4000
00:22:02.804  	 NVMe0n1             :       1.01   15545.39      60.72       0.00     0.00    8199.76    1325.61    9115.46
00:22:02.804  
[2024-12-16T06:32:19.780Z]  ===================================================================================================================
00:22:02.804  
[2024-12-16T06:32:19.780Z]  Total                       :              15545.39      60.72       0.00     0.00    8199.76    1325.61    9115.46
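A quick sanity check of the table above: with a 4096-byte IO size the MiB/s column follows directly from IOPS, and the last three columns are average/min/max completion latency in microseconds (per the Latency(us) header). For the 1-second run:

  # 15545.39 IOPS * 4096 B per IO / 1048576 B per MiB ~= 60.72 MiB/s, matching the table
  echo '15545.39 * 4096 / 1048576' | bc -l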
00:22:02.804   06:32:19	-- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:02.804   06:32:19	-- host/failover.sh@95 -- # grep -q NVMe0
00:22:02.804   06:32:19	-- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:03.063   06:32:20	-- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:03.063   06:32:20	-- host/failover.sh@99 -- # grep -q NVMe0
00:22:03.323   06:32:20	-- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:03.582   06:32:20	-- host/failover.sh@101 -- # sleep 3
00:22:06.873   06:32:23	-- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:06.873   06:32:23	-- host/failover.sh@103 -- # grep -q NVMe0
00:22:06.873   06:32:23	-- host/failover.sh@108 -- # killprocess 85026
00:22:06.873   06:32:23	-- common/autotest_common.sh@936 -- # '[' -z 85026 ']'
00:22:06.873   06:32:23	-- common/autotest_common.sh@940 -- # kill -0 85026
00:22:06.873    06:32:23	-- common/autotest_common.sh@941 -- # uname
00:22:06.873   06:32:23	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:06.873    06:32:23	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85026
00:22:06.873   06:32:23	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:06.873   06:32:23	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:06.873  killing process with pid 85026
00:22:06.873   06:32:23	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 85026'
00:22:06.873   06:32:23	-- common/autotest_common.sh@955 -- # kill 85026
00:22:06.873   06:32:23	-- common/autotest_common.sh@960 -- # wait 85026
00:22:07.135   06:32:23	-- host/failover.sh@110 -- # sync
00:22:07.135   06:32:24	-- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:07.395   06:32:24	-- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:22:07.395   06:32:24	-- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:07.395   06:32:24	-- host/failover.sh@116 -- # nvmftestfini
00:22:07.395   06:32:24	-- nvmf/common.sh@476 -- # nvmfcleanup
00:22:07.395   06:32:24	-- nvmf/common.sh@116 -- # sync
00:22:07.395   06:32:24	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:22:07.395   06:32:24	-- nvmf/common.sh@119 -- # set +e
00:22:07.395   06:32:24	-- nvmf/common.sh@120 -- # for i in {1..20}
00:22:07.395   06:32:24	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:22:07.395  rmmod nvme_tcp
00:22:07.395  rmmod nvme_fabrics
00:22:07.395  rmmod nvme_keyring
00:22:07.395   06:32:24	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:22:07.395   06:32:24	-- nvmf/common.sh@123 -- # set -e
00:22:07.395   06:32:24	-- nvmf/common.sh@124 -- # return 0
00:22:07.395   06:32:24	-- nvmf/common.sh@477 -- # '[' -n 84660 ']'
00:22:07.395   06:32:24	-- nvmf/common.sh@478 -- # killprocess 84660
00:22:07.395   06:32:24	-- common/autotest_common.sh@936 -- # '[' -z 84660 ']'
00:22:07.395   06:32:24	-- common/autotest_common.sh@940 -- # kill -0 84660
00:22:07.395    06:32:24	-- common/autotest_common.sh@941 -- # uname
00:22:07.395   06:32:24	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:07.395    06:32:24	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84660
00:22:07.395   06:32:24	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:07.395   06:32:24	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:07.395  killing process with pid 84660
00:22:07.395   06:32:24	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 84660'
00:22:07.395   06:32:24	-- common/autotest_common.sh@955 -- # kill 84660
00:22:07.395   06:32:24	-- common/autotest_common.sh@960 -- # wait 84660
00:22:07.655   06:32:24	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:22:07.655   06:32:24	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:22:07.655   06:32:24	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:22:07.655   06:32:24	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:07.655   06:32:24	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:22:07.655   06:32:24	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:07.655   06:32:24	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:07.655    06:32:24	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:07.655   06:32:24	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:22:07.915  
00:22:07.915  real	0m32.738s
00:22:07.915  user	2m6.506s
00:22:07.915  sys	0m4.874s
00:22:07.915   06:32:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:07.915   06:32:24	-- common/autotest_common.sh@10 -- # set +x
00:22:07.915  ************************************
00:22:07.915  END TEST nvmf_failover
00:22:07.915  ************************************
00:22:07.915   06:32:24	-- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:22:07.915   06:32:24	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:22:07.915   06:32:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:07.915   06:32:24	-- common/autotest_common.sh@10 -- # set +x
00:22:07.915  ************************************
00:22:07.915  START TEST nvmf_discovery
00:22:07.915  ************************************
00:22:07.915   06:32:24	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:22:07.915  * Looking for test storage...
00:22:07.915  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:22:07.915    06:32:24	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:22:07.915     06:32:24	-- common/autotest_common.sh@1690 -- # lcov --version
00:22:07.915     06:32:24	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:22:07.915    06:32:24	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:22:07.915    06:32:24	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:22:07.915    06:32:24	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:22:07.915    06:32:24	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:22:07.915    06:32:24	-- scripts/common.sh@335 -- # IFS=.-:
00:22:07.915    06:32:24	-- scripts/common.sh@335 -- # read -ra ver1
00:22:07.915    06:32:24	-- scripts/common.sh@336 -- # IFS=.-:
00:22:07.915    06:32:24	-- scripts/common.sh@336 -- # read -ra ver2
00:22:07.915    06:32:24	-- scripts/common.sh@337 -- # local 'op=<'
00:22:07.915    06:32:24	-- scripts/common.sh@339 -- # ver1_l=2
00:22:07.915    06:32:24	-- scripts/common.sh@340 -- # ver2_l=1
00:22:07.915    06:32:24	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:22:07.915    06:32:24	-- scripts/common.sh@343 -- # case "$op" in
00:22:07.915    06:32:24	-- scripts/common.sh@344 -- # : 1
00:22:07.915    06:32:24	-- scripts/common.sh@363 -- # (( v = 0 ))
00:22:07.915    06:32:24	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:07.915     06:32:24	-- scripts/common.sh@364 -- # decimal 1
00:22:07.915     06:32:24	-- scripts/common.sh@352 -- # local d=1
00:22:07.915     06:32:24	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:07.915     06:32:24	-- scripts/common.sh@354 -- # echo 1
00:22:07.915    06:32:24	-- scripts/common.sh@364 -- # ver1[v]=1
00:22:07.915     06:32:24	-- scripts/common.sh@365 -- # decimal 2
00:22:07.915     06:32:24	-- scripts/common.sh@352 -- # local d=2
00:22:07.915     06:32:24	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:07.915     06:32:24	-- scripts/common.sh@354 -- # echo 2
00:22:07.915    06:32:24	-- scripts/common.sh@365 -- # ver2[v]=2
00:22:07.915    06:32:24	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:22:07.915    06:32:24	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:22:07.915    06:32:24	-- scripts/common.sh@367 -- # return 0
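The trace above is autotest_common.sh deciding whether the installed lcov is older than 2: cmp_versions splits "1.15" and "2" on the characters ".-:" and compares the fields numerically, so lt 1.15 2 returns 0 and the pre-2.0 style --rc lcov_branch_coverage options are exported below. A simplified sketch of the same comparison (not the exact scripts/common.sh implementation):

  # simplified version-compare sketch, assuming plain dotted versions
  lt() {
      local IFS='.-:'
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov older than 2: use --rc lcov_branch_coverage options"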
00:22:07.915    06:32:24	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:07.915    06:32:24	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:22:07.915  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:07.915  		--rc genhtml_branch_coverage=1
00:22:07.915  		--rc genhtml_function_coverage=1
00:22:07.915  		--rc genhtml_legend=1
00:22:07.915  		--rc geninfo_all_blocks=1
00:22:07.915  		--rc geninfo_unexecuted_blocks=1
00:22:07.915  		
00:22:07.915  		'
00:22:07.915    06:32:24	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:22:07.915  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:07.915  		--rc genhtml_branch_coverage=1
00:22:07.915  		--rc genhtml_function_coverage=1
00:22:07.916  		--rc genhtml_legend=1
00:22:07.916  		--rc geninfo_all_blocks=1
00:22:07.916  		--rc geninfo_unexecuted_blocks=1
00:22:07.916  		
00:22:07.916  		'
00:22:07.916    06:32:24	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:22:07.916  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:07.916  		--rc genhtml_branch_coverage=1
00:22:07.916  		--rc genhtml_function_coverage=1
00:22:07.916  		--rc genhtml_legend=1
00:22:07.916  		--rc geninfo_all_blocks=1
00:22:07.916  		--rc geninfo_unexecuted_blocks=1
00:22:07.916  		
00:22:07.916  		'
00:22:07.916    06:32:24	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:22:07.916  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:07.916  		--rc genhtml_branch_coverage=1
00:22:07.916  		--rc genhtml_function_coverage=1
00:22:07.916  		--rc genhtml_legend=1
00:22:07.916  		--rc geninfo_all_blocks=1
00:22:07.916  		--rc geninfo_unexecuted_blocks=1
00:22:07.916  		
00:22:07.916  		'
00:22:07.916   06:32:24	-- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:07.916     06:32:24	-- nvmf/common.sh@7 -- # uname -s
00:22:07.916    06:32:24	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:07.916    06:32:24	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:07.916    06:32:24	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:07.916    06:32:24	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:07.916    06:32:24	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:07.916    06:32:24	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:07.916    06:32:24	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:07.916    06:32:24	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:07.916    06:32:24	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:07.916     06:32:24	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:07.916    06:32:24	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:22:07.916    06:32:24	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:22:07.916    06:32:24	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:07.916    06:32:24	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:07.916    06:32:24	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:22:07.916    06:32:24	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:07.916     06:32:24	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:07.916     06:32:24	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:07.916     06:32:24	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:07.916      06:32:24	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:07.916      06:32:24	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:07.916      06:32:24	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:07.916      06:32:24	-- paths/export.sh@5 -- # export PATH
00:22:07.916      06:32:24	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:07.916    06:32:24	-- nvmf/common.sh@46 -- # : 0
00:22:07.916    06:32:24	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:22:07.916    06:32:24	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:22:07.916    06:32:24	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:22:07.916    06:32:24	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:07.916    06:32:24	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:07.916    06:32:24	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:22:07.916    06:32:24	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:22:07.916    06:32:24	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:22:07.916   06:32:24	-- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:22:07.916   06:32:24	-- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:22:07.916   06:32:24	-- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:22:07.916   06:32:24	-- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:22:07.916   06:32:24	-- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:22:07.916   06:32:24	-- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:22:07.916   06:32:24	-- host/discovery.sh@25 -- # nvmftestinit
00:22:07.916   06:32:24	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:22:07.916   06:32:24	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:07.916   06:32:24	-- nvmf/common.sh@436 -- # prepare_net_devs
00:22:07.916   06:32:24	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:22:07.916   06:32:24	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:22:07.916   06:32:24	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:07.916   06:32:24	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:07.916    06:32:24	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:07.916   06:32:24	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:22:07.916   06:32:24	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:22:07.916   06:32:24	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:22:07.916   06:32:24	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:22:07.916   06:32:24	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:22:07.916   06:32:24	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:22:07.916   06:32:24	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:07.916   06:32:24	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:07.916   06:32:24	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:22:07.916   06:32:24	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:22:07.916   06:32:24	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:22:07.916   06:32:24	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:22:07.916   06:32:24	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:22:07.916   06:32:24	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:07.916   06:32:24	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:22:07.916   06:32:24	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:22:07.916   06:32:24	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:22:07.916   06:32:24	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:22:07.916   06:32:24	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:22:07.916   06:32:24	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:22:08.176  Cannot find device "nvmf_tgt_br"
00:22:08.176   06:32:24	-- nvmf/common.sh@154 -- # true
00:22:08.176   06:32:24	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:22:08.176  Cannot find device "nvmf_tgt_br2"
00:22:08.176   06:32:24	-- nvmf/common.sh@155 -- # true
00:22:08.176   06:32:24	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:22:08.176   06:32:24	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:22:08.176  Cannot find device "nvmf_tgt_br"
00:22:08.176   06:32:24	-- nvmf/common.sh@157 -- # true
00:22:08.176   06:32:24	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:22:08.176  Cannot find device "nvmf_tgt_br2"
00:22:08.176   06:32:24	-- nvmf/common.sh@158 -- # true
00:22:08.176   06:32:24	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:22:08.176   06:32:24	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:22:08.176   06:32:24	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:08.176  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:08.176   06:32:24	-- nvmf/common.sh@161 -- # true
00:22:08.176   06:32:24	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:08.176  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:08.176   06:32:24	-- nvmf/common.sh@162 -- # true
00:22:08.176   06:32:24	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:22:08.176   06:32:25	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:22:08.176   06:32:25	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:22:08.176   06:32:25	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:22:08.176   06:32:25	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:22:08.176   06:32:25	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:22:08.176   06:32:25	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:22:08.176   06:32:25	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:22:08.176   06:32:25	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:22:08.176   06:32:25	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:22:08.176   06:32:25	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:22:08.176   06:32:25	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:22:08.176   06:32:25	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:22:08.176   06:32:25	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:08.176   06:32:25	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:08.176   06:32:25	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:08.176   06:32:25	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:22:08.176   06:32:25	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:22:08.176   06:32:25	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:22:08.176   06:32:25	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:08.176   06:32:25	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:08.176   06:32:25	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:08.176   06:32:25	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
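nvmf_veth_init, traced above, builds the test network from scratch: the target-side veth ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) live in the namespace nvmf_tgt_ns_spdk, the initiator keeps nvmf_init_if at 10.0.0.1 in the root namespace, the peer ends are enslaved to the bridge nvmf_br, and iptables admits TCP 4420 on the initiator interface. The pings below verify the resulting connectivity in both directions. A condensed sketch of the topology, with the same names as the trace (second target interface and the link-up steps omitted):

  # condensed from the nvmf/common.sh trace above; error handling and "ip link set ... up" omitted
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT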
00:22:08.435   06:32:25	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:22:08.435  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:08.435  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms
00:22:08.435  
00:22:08.435  --- 10.0.0.2 ping statistics ---
00:22:08.435  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:08.435  rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:22:08.435   06:32:25	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:22:08.435  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:08.435  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms
00:22:08.435  
00:22:08.435  --- 10.0.0.3 ping statistics ---
00:22:08.435  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:08.435  rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:22:08.436   06:32:25	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:08.436  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:08.436  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
00:22:08.436  
00:22:08.436  --- 10.0.0.1 ping statistics ---
00:22:08.436  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:08.436  rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms
00:22:08.436   06:32:25	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:08.436   06:32:25	-- nvmf/common.sh@421 -- # return 0
00:22:08.436   06:32:25	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:22:08.436   06:32:25	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:08.436   06:32:25	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:22:08.436   06:32:25	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:22:08.436   06:32:25	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:08.436   06:32:25	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:22:08.436   06:32:25	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:22:08.436   06:32:25	-- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:22:08.436   06:32:25	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:22:08.436   06:32:25	-- common/autotest_common.sh@722 -- # xtrace_disable
00:22:08.436   06:32:25	-- common/autotest_common.sh@10 -- # set +x
00:22:08.436   06:32:25	-- nvmf/common.sh@469 -- # nvmfpid=85470
00:22:08.436   06:32:25	-- nvmf/common.sh@470 -- # waitforlisten 85470
00:22:08.436   06:32:25	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:08.436   06:32:25	-- common/autotest_common.sh@829 -- # '[' -z 85470 ']'
00:22:08.436   06:32:25	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:08.436   06:32:25	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:08.436  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:08.436   06:32:25	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:08.436   06:32:25	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:08.436   06:32:25	-- common/autotest_common.sh@10 -- # set +x
00:22:08.436  [2024-12-16 06:32:25.248805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:08.436  [2024-12-16 06:32:25.248881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:08.436  [2024-12-16 06:32:25.382699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:08.695  [2024-12-16 06:32:25.461514] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:08.695  [2024-12-16 06:32:25.461648] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:08.695  [2024-12-16 06:32:25.461660] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:08.695  [2024-12-16 06:32:25.461676] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:08.695  [2024-12-16 06:32:25.461705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:09.264   06:32:26	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:09.264   06:32:26	-- common/autotest_common.sh@862 -- # return 0
00:22:09.264   06:32:26	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:22:09.264   06:32:26	-- common/autotest_common.sh@728 -- # xtrace_disable
00:22:09.264   06:32:26	-- common/autotest_common.sh@10 -- # set +x
00:22:09.523   06:32:26	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:09.523   06:32:26	-- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:09.523   06:32:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.523   06:32:26	-- common/autotest_common.sh@10 -- # set +x
00:22:09.523  [2024-12-16 06:32:26.264432] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:09.523   06:32:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.523   06:32:26	-- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:22:09.523   06:32:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.523   06:32:26	-- common/autotest_common.sh@10 -- # set +x
00:22:09.523  [2024-12-16 06:32:26.272568] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:22:09.523   06:32:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.523   06:32:26	-- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:22:09.523   06:32:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.523   06:32:26	-- common/autotest_common.sh@10 -- # set +x
00:22:09.523  null0
00:22:09.523   06:32:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.523   06:32:26	-- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:22:09.523   06:32:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.523   06:32:26	-- common/autotest_common.sh@10 -- # set +x
00:22:09.523  null1
00:22:09.523   06:32:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.523   06:32:26	-- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:22:09.523   06:32:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.523   06:32:26	-- common/autotest_common.sh@10 -- # set +x
00:22:09.523   06:32:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
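Before the host side comes up, the target (RPC socket /var/tmp/spdk.sock, running inside the namespace) is given a TCP transport, a listener for the well-known discovery NQN on port 8009, and two null bdevs that will later back the namespaces of cnode0. A condensed sketch of the traced rpc_cmd calls (size and block-size arguments exactly as traced):

  # condensed from the rpc_cmd trace above; rpc.py stands in for the rpc_cmd wrapper
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc.py bdev_null_create null0 1000 512
  rpc.py bdev_null_create null1 1000 512
  rpc.py bdev_wait_for_examine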
00:22:09.523   06:32:26	-- host/discovery.sh@45 -- # hostpid=85520
00:22:09.523   06:32:26	-- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:22:09.523   06:32:26	-- host/discovery.sh@46 -- # waitforlisten 85520 /tmp/host.sock
00:22:09.523   06:32:26	-- common/autotest_common.sh@829 -- # '[' -z 85520 ']'
00:22:09.523   06:32:26	-- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
00:22:09.523   06:32:26	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:09.523  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:22:09.523   06:32:26	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:22:09.523   06:32:26	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:09.523   06:32:26	-- common/autotest_common.sh@10 -- # set +x
00:22:09.523  [2024-12-16 06:32:26.364545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:09.523  [2024-12-16 06:32:26.364648] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85520 ]
00:22:09.782  [2024-12-16 06:32:26.502714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:09.782  [2024-12-16 06:32:26.595691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:09.782  [2024-12-16 06:32:26.595864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
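From this point two SPDK applications are driven at once: the target inside the namespace (pid 85470, RPC on the default /var/tmp/spdk.sock) and the host-side app just started with -r /tmp/host.sock (pid 85520), which acts as the discovery client. In the trace below, rpc_cmd without -s configures the former and rpc_cmd -s /tmp/host.sock queries the latter, for example:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0          # target side, default socket
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers               # host side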
00:22:10.350   06:32:27	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:10.350   06:32:27	-- common/autotest_common.sh@862 -- # return 0
00:22:10.350   06:32:27	-- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:10.350   06:32:27	-- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:22:10.350   06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.350   06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.350   06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.350   06:32:27	-- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:22:10.350   06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.350   06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.350   06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.350   06:32:27	-- host/discovery.sh@72 -- # notify_id=0
00:22:10.350    06:32:27	-- host/discovery.sh@78 -- # get_subsystem_names
00:22:10.350    06:32:27	-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:10.350    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.350    06:32:27	-- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:10.350    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.350    06:32:27	-- host/discovery.sh@59 -- # sort
00:22:10.350    06:32:27	-- host/discovery.sh@59 -- # xargs
00:22:10.609    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.609   06:32:27	-- host/discovery.sh@78 -- # [[ '' == '' ]]
00:22:10.609    06:32:27	-- host/discovery.sh@79 -- # get_bdev_list
00:22:10.609    06:32:27	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:10.609    06:32:27	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:10.609    06:32:27	-- host/discovery.sh@55 -- # xargs
00:22:10.609    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.609    06:32:27	-- host/discovery.sh@55 -- # sort
00:22:10.609    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.609    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.609   06:32:27	-- host/discovery.sh@79 -- # [[ '' == '' ]]
00:22:10.609   06:32:27	-- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:22:10.609   06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.609   06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.609   06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.609    06:32:27	-- host/discovery.sh@82 -- # get_subsystem_names
00:22:10.609    06:32:27	-- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:10.609    06:32:27	-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:10.609    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.609    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.609    06:32:27	-- host/discovery.sh@59 -- # sort
00:22:10.609    06:32:27	-- host/discovery.sh@59 -- # xargs
00:22:10.609    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.609   06:32:27	-- host/discovery.sh@82 -- # [[ '' == '' ]]
00:22:10.609    06:32:27	-- host/discovery.sh@83 -- # get_bdev_list
00:22:10.609    06:32:27	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:10.609    06:32:27	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:10.609    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.609    06:32:27	-- host/discovery.sh@55 -- # sort
00:22:10.609    06:32:27	-- host/discovery.sh@55 -- # xargs
00:22:10.609    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.609    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.609   06:32:27	-- host/discovery.sh@83 -- # [[ '' == '' ]]
00:22:10.609   06:32:27	-- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:22:10.609   06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.609   06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.609   06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.609    06:32:27	-- host/discovery.sh@86 -- # get_subsystem_names
00:22:10.609    06:32:27	-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:10.609    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.609    06:32:27	-- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:10.609    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.609    06:32:27	-- host/discovery.sh@59 -- # sort
00:22:10.609    06:32:27	-- host/discovery.sh@59 -- # xargs
00:22:10.609    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.868   06:32:27	-- host/discovery.sh@86 -- # [[ '' == '' ]]
00:22:10.868    06:32:27	-- host/discovery.sh@87 -- # get_bdev_list
00:22:10.868    06:32:27	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:10.868    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.868    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.868    06:32:27	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:10.868    06:32:27	-- host/discovery.sh@55 -- # sort
00:22:10.868    06:32:27	-- host/discovery.sh@55 -- # xargs
00:22:10.868    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.868   06:32:27	-- host/discovery.sh@87 -- # [[ '' == '' ]]
00:22:10.868   06:32:27	-- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:10.868   06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.868   06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.868  [2024-12-16 06:32:27.676925] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:10.868   06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.868    06:32:27	-- host/discovery.sh@92 -- # get_subsystem_names
00:22:10.868    06:32:27	-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:10.868    06:32:27	-- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:10.868    06:32:27	-- host/discovery.sh@59 -- # sort
00:22:10.868    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.868    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.868    06:32:27	-- host/discovery.sh@59 -- # xargs
00:22:10.868    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.868   06:32:27	-- host/discovery.sh@92 -- # [[ '' == '' ]]
00:22:10.868    06:32:27	-- host/discovery.sh@93 -- # get_bdev_list
00:22:10.868    06:32:27	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:10.868    06:32:27	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:10.868    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.868    06:32:27	-- host/discovery.sh@55 -- # sort
00:22:10.868    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.868    06:32:27	-- host/discovery.sh@55 -- # xargs
00:22:10.868    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.868   06:32:27	-- host/discovery.sh@93 -- # [[ '' == '' ]]
00:22:10.868   06:32:27	-- host/discovery.sh@94 -- # get_notification_count
00:22:10.868    06:32:27	-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:22:10.868    06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.868    06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:10.868    06:32:27	-- host/discovery.sh@74 -- # jq '. | length'
00:22:10.868    06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:11.127   06:32:27	-- host/discovery.sh@74 -- # notification_count=0
00:22:11.127   06:32:27	-- host/discovery.sh@75 -- # notify_id=0
00:22:11.127   06:32:27	-- host/discovery.sh@95 -- # [[ 0 == 0 ]]
00:22:11.127   06:32:27	-- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:22:11.127   06:32:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:11.127   06:32:27	-- common/autotest_common.sh@10 -- # set +x
00:22:11.127   06:32:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:11.127   06:32:27	-- host/discovery.sh@100 -- # sleep 1
00:22:11.385  [2024-12-16 06:32:28.317297] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:11.385  [2024-12-16 06:32:28.317325] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:11.385  [2024-12-16 06:32:28.317343] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:11.644  [2024-12-16 06:32:28.403384] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:22:11.644  [2024-12-16 06:32:28.458940] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:11.644  [2024-12-16 06:32:28.458967] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
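The INFO lines above are the discovery path doing its work: bdev_nvme_start_discovery (issued at host/discovery.sh@51) attaches a discovery controller to 10.0.0.2:8009, fetches the discovery log page, finds the NVM subsystem cnode0 at 10.0.0.2:4420, and attaches it under the base name nvme, so the host ends up with controller nvme0 and, once null0 is exposed as a namespace, the bdev nvme0n1; that is what the get_subsystem_names and get_bdev_list checks below expect. The host-side calls, as traced:

  # as traced: start discovery on the host app, then list what it attached
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect nvme0n1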
00:22:11.902    06:32:28	-- host/discovery.sh@101 -- # get_subsystem_names
00:22:11.902    06:32:28	-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:11.902    06:32:28	-- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:11.902    06:32:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:11.902    06:32:28	-- common/autotest_common.sh@10 -- # set +x
00:22:11.902    06:32:28	-- host/discovery.sh@59 -- # sort
00:22:11.902    06:32:28	-- host/discovery.sh@59 -- # xargs
00:22:11.902    06:32:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.161   06:32:28	-- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:12.161    06:32:28	-- host/discovery.sh@102 -- # get_bdev_list
00:22:12.161    06:32:28	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:12.161    06:32:28	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:12.161    06:32:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.161    06:32:28	-- host/discovery.sh@55 -- # sort
00:22:12.161    06:32:28	-- host/discovery.sh@55 -- # xargs
00:22:12.161    06:32:28	-- common/autotest_common.sh@10 -- # set +x
00:22:12.161    06:32:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.161   06:32:28	-- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:22:12.161    06:32:28	-- host/discovery.sh@103 -- # get_subsystem_paths nvme0
00:22:12.161    06:32:28	-- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:12.161    06:32:28	-- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:12.161    06:32:28	-- host/discovery.sh@63 -- # sort -n
00:22:12.161    06:32:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.161    06:32:28	-- host/discovery.sh@63 -- # xargs
00:22:12.161    06:32:28	-- common/autotest_common.sh@10 -- # set +x
00:22:12.161    06:32:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.161   06:32:29	-- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]]
00:22:12.161   06:32:29	-- host/discovery.sh@104 -- # get_notification_count
00:22:12.161    06:32:29	-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:22:12.161    06:32:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.161    06:32:29	-- common/autotest_common.sh@10 -- # set +x
00:22:12.161    06:32:29	-- host/discovery.sh@74 -- # jq '. | length'
00:22:12.161    06:32:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.161   06:32:29	-- host/discovery.sh@74 -- # notification_count=1
00:22:12.161   06:32:29	-- host/discovery.sh@75 -- # notify_id=1
00:22:12.161   06:32:29	-- host/discovery.sh@105 -- # [[ 1 == 1 ]]
00:22:12.161   06:32:29	-- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:22:12.161   06:32:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.161   06:32:29	-- common/autotest_common.sh@10 -- # set +x
00:22:12.161   06:32:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.161   06:32:29	-- host/discovery.sh@109 -- # sleep 1
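Note: host/discovery.sh@108 above adds a second namespace, null1, to nqn.2016-06.io.spdk:cnode0 and then sleeps so the discovery host can react to the resulting AER. A roughly equivalent manual check, assuming the default target RPC socket and the /tmp/host.sock host socket seen in this run, would be:

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1          # target side
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort      # host side; should list nvme0n1 nvme0n2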
00:22:13.538    06:32:30	-- host/discovery.sh@110 -- # get_bdev_list
00:22:13.538    06:32:30	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:13.538    06:32:30	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:13.538    06:32:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:13.538    06:32:30	-- host/discovery.sh@55 -- # sort
00:22:13.538    06:32:30	-- common/autotest_common.sh@10 -- # set +x
00:22:13.538    06:32:30	-- host/discovery.sh@55 -- # xargs
00:22:13.538    06:32:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:13.538   06:32:30	-- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:13.538   06:32:30	-- host/discovery.sh@111 -- # get_notification_count
00:22:13.538    06:32:30	-- host/discovery.sh@74 -- # jq '. | length'
00:22:13.538    06:32:30	-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:22:13.538    06:32:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:13.538    06:32:30	-- common/autotest_common.sh@10 -- # set +x
00:22:13.538    06:32:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:13.538   06:32:30	-- host/discovery.sh@74 -- # notification_count=1
00:22:13.538   06:32:30	-- host/discovery.sh@75 -- # notify_id=2
00:22:13.538   06:32:30	-- host/discovery.sh@112 -- # [[ 1 == 1 ]]
00:22:13.538   06:32:30	-- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:22:13.538   06:32:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:13.538   06:32:30	-- common/autotest_common.sh@10 -- # set +x
00:22:13.538  [2024-12-16 06:32:30.209752] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:13.538  [2024-12-16 06:32:30.210747] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:22:13.538  [2024-12-16 06:32:30.210986] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:13.538   06:32:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:13.538   06:32:30	-- host/discovery.sh@117 -- # sleep 1
00:22:13.538  [2024-12-16 06:32:30.296768] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:22:13.538  [2024-12-16 06:32:30.360232] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:13.538  [2024-12-16 06:32:30.360256] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:22:13.538  [2024-12-16 06:32:30.360263] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
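Note: the step above is host/discovery.sh@116: a second listener on 10.0.0.2:4421 is added to the subsystem, the target raises a discovery AER, and the host discovery poller re-reads the discovery log page and attaches the new 4421 path next to the existing 4420 one. A sketch of the same step done by hand, reusing the addresses from this run:

    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n
    # expected: 4420 and 4421 once the new path has attached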
00:22:14.474    06:32:31	-- host/discovery.sh@118 -- # get_subsystem_names
00:22:14.474    06:32:31	-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:14.474    06:32:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.474    06:32:31	-- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:14.474    06:32:31	-- common/autotest_common.sh@10 -- # set +x
00:22:14.474    06:32:31	-- host/discovery.sh@59 -- # xargs
00:22:14.474    06:32:31	-- host/discovery.sh@59 -- # sort
00:22:14.474    06:32:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.474   06:32:31	-- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:14.474    06:32:31	-- host/discovery.sh@119 -- # get_bdev_list
00:22:14.474    06:32:31	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:14.474    06:32:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.474    06:32:31	-- common/autotest_common.sh@10 -- # set +x
00:22:14.475    06:32:31	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:14.475    06:32:31	-- host/discovery.sh@55 -- # sort
00:22:14.475    06:32:31	-- host/discovery.sh@55 -- # xargs
00:22:14.475    06:32:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.475   06:32:31	-- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:14.475    06:32:31	-- host/discovery.sh@120 -- # get_subsystem_paths nvme0
00:22:14.475    06:32:31	-- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:14.475    06:32:31	-- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:14.475    06:32:31	-- host/discovery.sh@63 -- # sort -n
00:22:14.475    06:32:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.475    06:32:31	-- common/autotest_common.sh@10 -- # set +x
00:22:14.475    06:32:31	-- host/discovery.sh@63 -- # xargs
00:22:14.475    06:32:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.475   06:32:31	-- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:22:14.475   06:32:31	-- host/discovery.sh@121 -- # get_notification_count
00:22:14.475    06:32:31	-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:14.475    06:32:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.475    06:32:31	-- common/autotest_common.sh@10 -- # set +x
00:22:14.475    06:32:31	-- host/discovery.sh@74 -- # jq '. | length'
00:22:14.475    06:32:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.475   06:32:31	-- host/discovery.sh@74 -- # notification_count=0
00:22:14.475   06:32:31	-- host/discovery.sh@75 -- # notify_id=2
00:22:14.475   06:32:31	-- host/discovery.sh@122 -- # [[ 0 == 0 ]]
00:22:14.475   06:32:31	-- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:14.475   06:32:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.475   06:32:31	-- common/autotest_common.sh@10 -- # set +x
00:22:14.475  [2024-12-16 06:32:31.443053] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:22:14.475  [2024-12-16 06:32:31.443231] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:14.475   06:32:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.475   06:32:31	-- host/discovery.sh@127 -- # sleep 1
00:22:14.475  [2024-12-16 06:32:31.447356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:14.475  [2024-12-16 06:32:31.447390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:14.475  [2024-12-16 06:32:31.447403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:14.475  [2024-12-16 06:32:31.447413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:14.475  [2024-12-16 06:32:31.447422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:14.475  [2024-12-16 06:32:31.447431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:14.475  [2024-12-16 06:32:31.447442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:14.475  [2024-12-16 06:32:31.447451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:14.475  [2024-12-16 06:32:31.447460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e9c0 is same with the state(5) to be set
00:22:14.734  [2024-12-16 06:32:31.457291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e9c0 (9): Bad file descriptor
00:22:14.734  [2024-12-16 06:32:31.467310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:14.734  [2024-12-16 06:32:31.467419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.734  [2024-12-16 06:32:31.467472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.734  [2024-12-16 06:32:31.467564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e9c0 with addr=10.0.0.2, port=4420
00:22:14.734  [2024-12-16 06:32:31.467580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e9c0 is same with the state(5) to be set
00:22:14.734  [2024-12-16 06:32:31.467599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e9c0 (9): Bad file descriptor
00:22:14.734  [2024-12-16 06:32:31.467616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:14.734  [2024-12-16 06:32:31.467628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:14.734  [2024-12-16 06:32:31.467640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:14.734  [2024-12-16 06:32:31.467658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:14.734  [2024-12-16 06:32:31.477370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:14.734  [2024-12-16 06:32:31.477458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.734  [2024-12-16 06:32:31.477564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.734  [2024-12-16 06:32:31.477586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e9c0 with addr=10.0.0.2, port=4420
00:22:14.734  [2024-12-16 06:32:31.477597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e9c0 is same with the state(5) to be set
00:22:14.734  [2024-12-16 06:32:31.477630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e9c0 (9): Bad file descriptor
00:22:14.734  [2024-12-16 06:32:31.477663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:14.734  [2024-12-16 06:32:31.477689] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:14.734  [2024-12-16 06:32:31.477701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:14.734  [2024-12-16 06:32:31.477718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:14.734  [2024-12-16 06:32:31.487423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:14.734  [2024-12-16 06:32:31.487714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.734  [2024-12-16 06:32:31.487771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.734  [2024-12-16 06:32:31.487792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e9c0 with addr=10.0.0.2, port=4420
00:22:14.734  [2024-12-16 06:32:31.487803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e9c0 is same with the state(5) to be set
00:22:14.734  [2024-12-16 06:32:31.487821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e9c0 (9): Bad file descriptor
00:22:14.734  [2024-12-16 06:32:31.487838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:14.734  [2024-12-16 06:32:31.487847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:14.734  [2024-12-16 06:32:31.487857] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:14.734  [2024-12-16 06:32:31.487874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:14.734  [2024-12-16 06:32:31.497668] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:14.734  [2024-12-16 06:32:31.497756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.734  [2024-12-16 06:32:31.497805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.734  [2024-12-16 06:32:31.497824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e9c0 with addr=10.0.0.2, port=4420
00:22:14.734  [2024-12-16 06:32:31.497835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e9c0 is same with the state(5) to be set
00:22:14.734  [2024-12-16 06:32:31.497851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e9c0 (9): Bad file descriptor
00:22:14.734  [2024-12-16 06:32:31.497865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:14.734  [2024-12-16 06:32:31.497874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:14.735  [2024-12-16 06:32:31.497883] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:14.735  [2024-12-16 06:32:31.497897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:14.735  [2024-12-16 06:32:31.507722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:14.735  [2024-12-16 06:32:31.507803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.735  [2024-12-16 06:32:31.507851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.735  [2024-12-16 06:32:31.507869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e9c0 with addr=10.0.0.2, port=4420
00:22:14.735  [2024-12-16 06:32:31.507880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e9c0 is same with the state(5) to be set
00:22:14.735  [2024-12-16 06:32:31.507896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e9c0 (9): Bad file descriptor
00:22:14.735  [2024-12-16 06:32:31.507910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:14.735  [2024-12-16 06:32:31.507919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:14.735  [2024-12-16 06:32:31.507928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:14.735  [2024-12-16 06:32:31.507942] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:14.735  [2024-12-16 06:32:31.517772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:14.735  [2024-12-16 06:32:31.517874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.735  [2024-12-16 06:32:31.517923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.735  [2024-12-16 06:32:31.517941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e9c0 with addr=10.0.0.2, port=4420
00:22:14.735  [2024-12-16 06:32:31.517952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e9c0 is same with the state(5) to be set
00:22:14.735  [2024-12-16 06:32:31.517968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e9c0 (9): Bad file descriptor
00:22:14.735  [2024-12-16 06:32:31.517983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:14.735  [2024-12-16 06:32:31.517992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:14.735  [2024-12-16 06:32:31.518000] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:14.735  [2024-12-16 06:32:31.518014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:14.735  [2024-12-16 06:32:31.527838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:14.735  [2024-12-16 06:32:31.527919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.735  [2024-12-16 06:32:31.527966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.735  [2024-12-16 06:32:31.527984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e9c0 with addr=10.0.0.2, port=4420
00:22:14.735  [2024-12-16 06:32:31.527995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e9c0 is same with the state(5) to be set
00:22:14.735  [2024-12-16 06:32:31.528011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e9c0 (9): Bad file descriptor
00:22:14.735  [2024-12-16 06:32:31.528025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:14.735  [2024-12-16 06:32:31.528034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:14.735  [2024-12-16 06:32:31.528043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:14.735  [2024-12-16 06:32:31.528057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:14.735  [2024-12-16 06:32:31.529124] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:22:14.735  [2024-12-16 06:32:31.529154] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
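Note: the errno=111 (ECONNREFUSED) burst above is the expected consequence of host/discovery.sh@126 removing the 4420 listener: the per-path controller keeps trying to reconnect to 10.0.0.2:4420 and fails until the next discovery log page no longer lists 4420, at which point the discovery layer drops that path and only 4421 remains. A hedged way to confirm the surviving path on the host socket used here:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
    # expected: 4421 only, after the 4420 path has been removed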
00:22:15.685    06:32:32	-- host/discovery.sh@128 -- # get_subsystem_names
00:22:15.685    06:32:32	-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:15.685    06:32:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:15.685    06:32:32	-- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:15.685    06:32:32	-- common/autotest_common.sh@10 -- # set +x
00:22:15.685    06:32:32	-- host/discovery.sh@59 -- # sort
00:22:15.685    06:32:32	-- host/discovery.sh@59 -- # xargs
00:22:15.685    06:32:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:15.685   06:32:32	-- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:15.685    06:32:32	-- host/discovery.sh@129 -- # get_bdev_list
00:22:15.685    06:32:32	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:15.685    06:32:32	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:15.685    06:32:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:15.685    06:32:32	-- host/discovery.sh@55 -- # sort
00:22:15.685    06:32:32	-- common/autotest_common.sh@10 -- # set +x
00:22:15.685    06:32:32	-- host/discovery.sh@55 -- # xargs
00:22:15.685    06:32:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:15.685   06:32:32	-- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:15.685    06:32:32	-- host/discovery.sh@130 -- # get_subsystem_paths nvme0
00:22:15.685    06:32:32	-- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:15.685    06:32:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:15.685    06:32:32	-- common/autotest_common.sh@10 -- # set +x
00:22:15.685    06:32:32	-- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:15.685    06:32:32	-- host/discovery.sh@63 -- # sort -n
00:22:15.685    06:32:32	-- host/discovery.sh@63 -- # xargs
00:22:15.685    06:32:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:15.685   06:32:32	-- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]]
00:22:15.685   06:32:32	-- host/discovery.sh@131 -- # get_notification_count
00:22:15.685    06:32:32	-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:15.685    06:32:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:15.685    06:32:32	-- host/discovery.sh@74 -- # jq '. | length'
00:22:15.685    06:32:32	-- common/autotest_common.sh@10 -- # set +x
00:22:15.685    06:32:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:15.969   06:32:32	-- host/discovery.sh@74 -- # notification_count=0
00:22:15.969   06:32:32	-- host/discovery.sh@75 -- # notify_id=2
00:22:15.969   06:32:32	-- host/discovery.sh@132 -- # [[ 0 == 0 ]]
00:22:15.969   06:32:32	-- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:22:15.969   06:32:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:15.969   06:32:32	-- common/autotest_common.sh@10 -- # set +x
00:22:15.969   06:32:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:15.969   06:32:32	-- host/discovery.sh@135 -- # sleep 1
00:22:16.905    06:32:33	-- host/discovery.sh@136 -- # get_subsystem_names
00:22:16.905    06:32:33	-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:16.905    06:32:33	-- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:16.905    06:32:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:16.905    06:32:33	-- common/autotest_common.sh@10 -- # set +x
00:22:16.905    06:32:33	-- host/discovery.sh@59 -- # xargs
00:22:16.905    06:32:33	-- host/discovery.sh@59 -- # sort
00:22:16.905    06:32:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:16.905   06:32:33	-- host/discovery.sh@136 -- # [[ '' == '' ]]
00:22:16.905    06:32:33	-- host/discovery.sh@137 -- # get_bdev_list
00:22:16.905    06:32:33	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:16.905    06:32:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:16.905    06:32:33	-- common/autotest_common.sh@10 -- # set +x
00:22:16.905    06:32:33	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:16.905    06:32:33	-- host/discovery.sh@55 -- # sort
00:22:16.905    06:32:33	-- host/discovery.sh@55 -- # xargs
00:22:16.905    06:32:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:16.905   06:32:33	-- host/discovery.sh@137 -- # [[ '' == '' ]]
00:22:16.905   06:32:33	-- host/discovery.sh@138 -- # get_notification_count
00:22:16.905    06:32:33	-- host/discovery.sh@74 -- # jq '. | length'
00:22:16.905    06:32:33	-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:16.905    06:32:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:16.905    06:32:33	-- common/autotest_common.sh@10 -- # set +x
00:22:16.905    06:32:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:16.905   06:32:33	-- host/discovery.sh@74 -- # notification_count=2
00:22:16.905   06:32:33	-- host/discovery.sh@75 -- # notify_id=4
00:22:16.905   06:32:33	-- host/discovery.sh@139 -- # [[ 2 == 2 ]]
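Note: get_notification_count (host/discovery.sh@74) appears to fetch the notifications that arrived after the last recorded notify_id and then advance notify_id by the count; stopping discovery removed the controller and both namespace bdevs, so two new notifications show up here (count 2, notify_id moves from 2 to 4). A manual equivalent, assuming the same host socket:

    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'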
00:22:16.905   06:32:33	-- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:16.905   06:32:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:16.905   06:32:33	-- common/autotest_common.sh@10 -- # set +x
00:22:18.281  [2024-12-16 06:32:34.868060] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:18.281  [2024-12-16 06:32:34.868228] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:18.281  [2024-12-16 06:32:34.868290] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:18.281  [2024-12-16 06:32:34.954156] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:22:18.281  [2024-12-16 06:32:35.013589] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:18.281  [2024-12-16 06:32:35.013759] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:22:18.281   06:32:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.281   06:32:35	-- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:18.282   06:32:35	-- common/autotest_common.sh@650 -- # local es=0
00:22:18.282   06:32:35	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:18.282   06:32:35	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:18.282   06:32:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.282    06:32:35	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:18.282   06:32:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.282   06:32:35	-- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:18.282   06:32:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.282   06:32:35	-- common/autotest_common.sh@10 -- # set +x
00:22:18.282  2024/12/16 06:32:35 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists
00:22:18.282  request:
00:22:18.282  {
00:22:18.282  "method": "bdev_nvme_start_discovery",
00:22:18.282  "params": {
00:22:18.282  "name": "nvme",
00:22:18.282  "trtype": "tcp",
00:22:18.282  "traddr": "10.0.0.2",
00:22:18.282  "hostnqn": "nqn.2021-12.io.spdk:test",
00:22:18.282  "adrfam": "ipv4",
00:22:18.282  "trsvcid": "8009",
00:22:18.282  "wait_for_attach": true
00:22:18.282  }
00:22:18.282  }
00:22:18.282  Got JSON-RPC error response
00:22:18.282  GoRPCClient: error on JSON-RPC call
00:22:18.282   06:32:35	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:18.282   06:32:35	-- common/autotest_common.sh@653 -- # es=1
00:22:18.282   06:32:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:18.282   06:32:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:18.282   06:32:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
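Note: host/discovery.sh@144 intentionally repeats bdev_nvme_start_discovery with the same service name (nvme) while the first discovery service is still running; the RPC fails with Code=-17 (File exists) and the NOT wrapper treats that non-zero exit as the expected outcome. The failing call, copied from the parameters logged above:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # expected to fail: Code=-17 Msg=File exists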
00:22:18.282    06:32:35	-- host/discovery.sh@146 -- # get_discovery_ctrlrs
00:22:18.282    06:32:35	-- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:22:18.282    06:32:35	-- host/discovery.sh@67 -- # jq -r '.[].name'
00:22:18.282    06:32:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.282    06:32:35	-- common/autotest_common.sh@10 -- # set +x
00:22:18.282    06:32:35	-- host/discovery.sh@67 -- # sort
00:22:18.282    06:32:35	-- host/discovery.sh@67 -- # xargs
00:22:18.282    06:32:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.282   06:32:35	-- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]]
00:22:18.282    06:32:35	-- host/discovery.sh@147 -- # get_bdev_list
00:22:18.282    06:32:35	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:18.282    06:32:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.282    06:32:35	-- common/autotest_common.sh@10 -- # set +x
00:22:18.282    06:32:35	-- host/discovery.sh@55 -- # sort
00:22:18.282    06:32:35	-- host/discovery.sh@55 -- # xargs
00:22:18.282    06:32:35	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:18.282    06:32:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.282   06:32:35	-- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:18.282   06:32:35	-- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:18.282   06:32:35	-- common/autotest_common.sh@650 -- # local es=0
00:22:18.282   06:32:35	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:18.282   06:32:35	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:18.282   06:32:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.282    06:32:35	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:18.282   06:32:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.282   06:32:35	-- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:22:18.282   06:32:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.282   06:32:35	-- common/autotest_common.sh@10 -- # set +x
00:22:18.282  2024/12/16 06:32:35 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists
00:22:18.282  request:
00:22:18.282  {
00:22:18.282  "method": "bdev_nvme_start_discovery",
00:22:18.282  "params": {
00:22:18.282  "name": "nvme_second",
00:22:18.282  "trtype": "tcp",
00:22:18.282  "traddr": "10.0.0.2",
00:22:18.282  "hostnqn": "nqn.2021-12.io.spdk:test",
00:22:18.282  "adrfam": "ipv4",
00:22:18.282  "trsvcid": "8009",
00:22:18.282  "wait_for_attach": true
00:22:18.282  }
00:22:18.282  }
00:22:18.282  Got JSON-RPC error response
00:22:18.282  GoRPCClient: error on JSON-RPC call
00:22:18.282   06:32:35	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:18.282   06:32:35	-- common/autotest_common.sh@653 -- # es=1
00:22:18.282   06:32:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:18.282   06:32:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:18.282   06:32:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:18.282    06:32:35	-- host/discovery.sh@152 -- # get_discovery_ctrlrs
00:22:18.282    06:32:35	-- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:22:18.282    06:32:35	-- host/discovery.sh@67 -- # jq -r '.[].name'
00:22:18.282    06:32:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.282    06:32:35	-- host/discovery.sh@67 -- # sort
00:22:18.282    06:32:35	-- common/autotest_common.sh@10 -- # set +x
00:22:18.282    06:32:35	-- host/discovery.sh@67 -- # xargs
00:22:18.282    06:32:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.282   06:32:35	-- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]]
00:22:18.282    06:32:35	-- host/discovery.sh@153 -- # get_bdev_list
00:22:18.282    06:32:35	-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:18.282    06:32:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.282    06:32:35	-- common/autotest_common.sh@10 -- # set +x
00:22:18.282    06:32:35	-- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:18.282    06:32:35	-- host/discovery.sh@55 -- # sort
00:22:18.282    06:32:35	-- host/discovery.sh@55 -- # xargs
00:22:18.540    06:32:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.540   06:32:35	-- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:18.540   06:32:35	-- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:22:18.540   06:32:35	-- common/autotest_common.sh@650 -- # local es=0
00:22:18.540   06:32:35	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:22:18.540   06:32:35	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:18.540   06:32:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.540    06:32:35	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:18.540   06:32:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.540   06:32:35	-- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:22:18.540   06:32:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.540   06:32:35	-- common/autotest_common.sh@10 -- # set +x
00:22:19.482  [2024-12-16 06:32:36.291386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:19.482  [2024-12-16 06:32:36.291502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:19.482  [2024-12-16 06:32:36.291542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5a970 with addr=10.0.0.2, port=8010
00:22:19.482  [2024-12-16 06:32:36.291560] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:22:19.482  [2024-12-16 06:32:36.291571] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:22:19.482  [2024-12-16 06:32:36.291582] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:22:20.418  [2024-12-16 06:32:37.291363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:20.418  [2024-12-16 06:32:37.291439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:20.418  [2024-12-16 06:32:37.291459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5a970 with addr=10.0.0.2, port=8010
00:22:20.418  [2024-12-16 06:32:37.291474] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:22:20.418  [2024-12-16 06:32:37.291483] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:22:20.418  [2024-12-16 06:32:37.291506] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:22:21.353  [2024-12-16 06:32:38.291293] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:22:21.353  2024/12/16 06:32:38 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out
00:22:21.353  request:
00:22:21.353  {
00:22:21.353  "method": "bdev_nvme_start_discovery",
00:22:21.353  "params": {
00:22:21.353  "name": "nvme_second",
00:22:21.353  "trtype": "tcp",
00:22:21.353  "traddr": "10.0.0.2",
00:22:21.353  "hostnqn": "nqn.2021-12.io.spdk:test",
00:22:21.353  "adrfam": "ipv4",
00:22:21.353  "trsvcid": "8010",
00:22:21.353  "attach_timeout_ms": 3000
00:22:21.353  }
00:22:21.353  }
00:22:21.353  Got JSON-RPC error response
00:22:21.353  GoRPCClient: error on JSON-RPC call
00:22:21.353   06:32:38	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:21.353   06:32:38	-- common/autotest_common.sh@653 -- # es=1
00:22:21.353   06:32:38	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:21.353   06:32:38	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:21.353   06:32:38	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
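Note: host/discovery.sh@156 points a new discovery service (nvme_second) at port 8010, where nothing is listening; every connect attempt fails with errno 111 until the 3000 ms attach timeout (-T 3000, i.e. attach_timeout_ms) expires and the RPC returns Code=-110 (Connection timed out), again the failure the NOT wrapper expects. The call as issued above:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
    # expected to fail after ~3 s: Code=-110 Msg=Connection timed out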
00:22:21.353    06:32:38	-- host/discovery.sh@158 -- # get_discovery_ctrlrs
00:22:21.353    06:32:38	-- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:22:21.353    06:32:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:21.353    06:32:38	-- common/autotest_common.sh@10 -- # set +x
00:22:21.353    06:32:38	-- host/discovery.sh@67 -- # jq -r '.[].name'
00:22:21.353    06:32:38	-- host/discovery.sh@67 -- # sort
00:22:21.353    06:32:38	-- host/discovery.sh@67 -- # xargs
00:22:21.353    06:32:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:21.612   06:32:38	-- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]]
00:22:21.612   06:32:38	-- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT
00:22:21.612   06:32:38	-- host/discovery.sh@162 -- # kill 85520
00:22:21.612   06:32:38	-- host/discovery.sh@163 -- # nvmftestfini
00:22:21.612   06:32:38	-- nvmf/common.sh@476 -- # nvmfcleanup
00:22:21.612   06:32:38	-- nvmf/common.sh@116 -- # sync
00:22:21.612   06:32:38	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:22:21.612   06:32:38	-- nvmf/common.sh@119 -- # set +e
00:22:21.612   06:32:38	-- nvmf/common.sh@120 -- # for i in {1..20}
00:22:21.612   06:32:38	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:22:21.612  rmmod nvme_tcp
00:22:21.612  rmmod nvme_fabrics
00:22:21.612  rmmod nvme_keyring
00:22:21.612   06:32:38	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:22:21.613   06:32:38	-- nvmf/common.sh@123 -- # set -e
00:22:21.613   06:32:38	-- nvmf/common.sh@124 -- # return 0
00:22:21.613   06:32:38	-- nvmf/common.sh@477 -- # '[' -n 85470 ']'
00:22:21.613   06:32:38	-- nvmf/common.sh@478 -- # killprocess 85470
00:22:21.613   06:32:38	-- common/autotest_common.sh@936 -- # '[' -z 85470 ']'
00:22:21.613   06:32:38	-- common/autotest_common.sh@940 -- # kill -0 85470
00:22:21.613    06:32:38	-- common/autotest_common.sh@941 -- # uname
00:22:21.613   06:32:38	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:21.613    06:32:38	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85470
00:22:21.613   06:32:38	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:21.613  killing process with pid 85470
00:22:21.613   06:32:38	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:21.613   06:32:38	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 85470'
00:22:21.613   06:32:38	-- common/autotest_common.sh@955 -- # kill 85470
00:22:21.613   06:32:38	-- common/autotest_common.sh@960 -- # wait 85470
00:22:21.871   06:32:38	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:22:21.871   06:32:38	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:22:21.871   06:32:38	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:22:21.871   06:32:38	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:21.871   06:32:38	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:22:21.872   06:32:38	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:21.872   06:32:38	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:21.872    06:32:38	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:21.872   06:32:38	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:22:21.872  
00:22:21.872  real	0m14.101s
00:22:21.872  user	0m27.637s
00:22:21.872  sys	0m1.675s
00:22:21.872   06:32:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:21.872   06:32:38	-- common/autotest_common.sh@10 -- # set +x
00:22:21.872  ************************************
00:22:21.872  END TEST nvmf_discovery
00:22:21.872  ************************************
00:22:21.872   06:32:38	-- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:21.872   06:32:38	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:22:21.872   06:32:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:21.872   06:32:38	-- common/autotest_common.sh@10 -- # set +x
00:22:21.872  ************************************
00:22:21.872  START TEST nvmf_discovery_remove_ifc
00:22:21.872  ************************************
00:22:21.872   06:32:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:22.131  * Looking for test storage...
00:22:22.131  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:22:22.131    06:32:38	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:22:22.131     06:32:38	-- common/autotest_common.sh@1690 -- # lcov --version
00:22:22.131     06:32:38	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:22:22.131    06:32:39	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:22:22.131    06:32:39	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:22:22.131    06:32:39	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:22:22.131    06:32:39	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:22:22.131    06:32:39	-- scripts/common.sh@335 -- # IFS=.-:
00:22:22.131    06:32:39	-- scripts/common.sh@335 -- # read -ra ver1
00:22:22.131    06:32:39	-- scripts/common.sh@336 -- # IFS=.-:
00:22:22.131    06:32:39	-- scripts/common.sh@336 -- # read -ra ver2
00:22:22.131    06:32:39	-- scripts/common.sh@337 -- # local 'op=<'
00:22:22.131    06:32:39	-- scripts/common.sh@339 -- # ver1_l=2
00:22:22.131    06:32:39	-- scripts/common.sh@340 -- # ver2_l=1
00:22:22.131    06:32:39	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:22:22.131    06:32:39	-- scripts/common.sh@343 -- # case "$op" in
00:22:22.131    06:32:39	-- scripts/common.sh@344 -- # : 1
00:22:22.131    06:32:39	-- scripts/common.sh@363 -- # (( v = 0 ))
00:22:22.131    06:32:39	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:22.131     06:32:39	-- scripts/common.sh@364 -- # decimal 1
00:22:22.131     06:32:39	-- scripts/common.sh@352 -- # local d=1
00:22:22.131     06:32:39	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:22.131     06:32:39	-- scripts/common.sh@354 -- # echo 1
00:22:22.131    06:32:39	-- scripts/common.sh@364 -- # ver1[v]=1
00:22:22.131     06:32:39	-- scripts/common.sh@365 -- # decimal 2
00:22:22.131     06:32:39	-- scripts/common.sh@352 -- # local d=2
00:22:22.131     06:32:39	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:22.131     06:32:39	-- scripts/common.sh@354 -- # echo 2
00:22:22.131    06:32:39	-- scripts/common.sh@365 -- # ver2[v]=2
00:22:22.131    06:32:39	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:22:22.131    06:32:39	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:22:22.131    06:32:39	-- scripts/common.sh@367 -- # return 0
00:22:22.131    06:32:39	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:22.131    06:32:39	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:22:22.131  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:22.131  		--rc genhtml_branch_coverage=1
00:22:22.131  		--rc genhtml_function_coverage=1
00:22:22.131  		--rc genhtml_legend=1
00:22:22.131  		--rc geninfo_all_blocks=1
00:22:22.131  		--rc geninfo_unexecuted_blocks=1
00:22:22.131  		
00:22:22.131  		'
00:22:22.131    06:32:39	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:22:22.131  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:22.131  		--rc genhtml_branch_coverage=1
00:22:22.131  		--rc genhtml_function_coverage=1
00:22:22.131  		--rc genhtml_legend=1
00:22:22.131  		--rc geninfo_all_blocks=1
00:22:22.131  		--rc geninfo_unexecuted_blocks=1
00:22:22.131  		
00:22:22.131  		'
00:22:22.131    06:32:39	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:22:22.131  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:22.131  		--rc genhtml_branch_coverage=1
00:22:22.131  		--rc genhtml_function_coverage=1
00:22:22.131  		--rc genhtml_legend=1
00:22:22.131  		--rc geninfo_all_blocks=1
00:22:22.131  		--rc geninfo_unexecuted_blocks=1
00:22:22.131  		
00:22:22.131  		'
00:22:22.131    06:32:39	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:22:22.131  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:22.131  		--rc genhtml_branch_coverage=1
00:22:22.131  		--rc genhtml_function_coverage=1
00:22:22.131  		--rc genhtml_legend=1
00:22:22.131  		--rc geninfo_all_blocks=1
00:22:22.131  		--rc geninfo_unexecuted_blocks=1
00:22:22.131  		
00:22:22.131  		'
00:22:22.131   06:32:39	-- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:22.131     06:32:39	-- nvmf/common.sh@7 -- # uname -s
00:22:22.131    06:32:39	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:22.131    06:32:39	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:22.131    06:32:39	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:22.131    06:32:39	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:22.131    06:32:39	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:22.131    06:32:39	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:22.131    06:32:39	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:22.131    06:32:39	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:22.131    06:32:39	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:22.131     06:32:39	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:22.131    06:32:39	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:22:22.131    06:32:39	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:22:22.131    06:32:39	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:22.131    06:32:39	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:22.131    06:32:39	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:22:22.131    06:32:39	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:22.131     06:32:39	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:22.131     06:32:39	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:22.131     06:32:39	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:22.131      06:32:39	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:22.131      06:32:39	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:22.131      06:32:39	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:22.131      06:32:39	-- paths/export.sh@5 -- # export PATH
00:22:22.131      06:32:39	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:22.131    06:32:39	-- nvmf/common.sh@46 -- # : 0
00:22:22.131    06:32:39	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:22:22.131    06:32:39	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:22:22.131    06:32:39	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:22:22.131    06:32:39	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:22.131    06:32:39	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:22.131    06:32:39	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:22:22.131    06:32:39	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:22:22.131    06:32:39	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:22:22.131   06:32:39	-- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:22:22.131   06:32:39	-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:22:22.131   06:32:39	-- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:22:22.131   06:32:39	-- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:22:22.131   06:32:39	-- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:22:22.131   06:32:39	-- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:22:22.131   06:32:39	-- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:22:22.132   06:32:39	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:22:22.132   06:32:39	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:22.132   06:32:39	-- nvmf/common.sh@436 -- # prepare_net_devs
00:22:22.132   06:32:39	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:22:22.132   06:32:39	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:22:22.132   06:32:39	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:22.132   06:32:39	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:22.132    06:32:39	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:22.132   06:32:39	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:22:22.132   06:32:39	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:22:22.132   06:32:39	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:22:22.132   06:32:39	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:22:22.132   06:32:39	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:22:22.132   06:32:39	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:22:22.132   06:32:39	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:22.132   06:32:39	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:22.132   06:32:39	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:22:22.132   06:32:39	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:22:22.132   06:32:39	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:22:22.132   06:32:39	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:22:22.132   06:32:39	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:22:22.132   06:32:39	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:22.132   06:32:39	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:22:22.132   06:32:39	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:22:22.132   06:32:39	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:22:22.132   06:32:39	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:22:22.132   06:32:39	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:22:22.132   06:32:39	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:22:22.132  Cannot find device "nvmf_tgt_br"
00:22:22.132   06:32:39	-- nvmf/common.sh@154 -- # true
00:22:22.132   06:32:39	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:22:22.132  Cannot find device "nvmf_tgt_br2"
00:22:22.132   06:32:39	-- nvmf/common.sh@155 -- # true
00:22:22.132   06:32:39	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:22:22.390   06:32:39	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:22:22.390  Cannot find device "nvmf_tgt_br"
00:22:22.390   06:32:39	-- nvmf/common.sh@157 -- # true
00:22:22.390   06:32:39	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:22:22.390  Cannot find device "nvmf_tgt_br2"
00:22:22.390   06:32:39	-- nvmf/common.sh@158 -- # true
00:22:22.390   06:32:39	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:22:22.390   06:32:39	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:22:22.390   06:32:39	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:22.390  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:22.390   06:32:39	-- nvmf/common.sh@161 -- # true
00:22:22.390   06:32:39	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:22.390  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:22.390   06:32:39	-- nvmf/common.sh@162 -- # true
00:22:22.390   06:32:39	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:22:22.390   06:32:39	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:22:22.390   06:32:39	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:22:22.390   06:32:39	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:22:22.390   06:32:39	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:22:22.390   06:32:39	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:22:22.390   06:32:39	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:22:22.391   06:32:39	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:22:22.391   06:32:39	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:22:22.391   06:32:39	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:22:22.391   06:32:39	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:22:22.391   06:32:39	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:22:22.391   06:32:39	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:22:22.391   06:32:39	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:22.391   06:32:39	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:22.391   06:32:39	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:22.391   06:32:39	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:22:22.391   06:32:39	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:22:22.391   06:32:39	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:22:22.391   06:32:39	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:22.391   06:32:39	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:22.391   06:32:39	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:22.649   06:32:39	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:22.649   06:32:39	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:22:22.649  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:22.649  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms
00:22:22.649  
00:22:22.649  --- 10.0.0.2 ping statistics ---
00:22:22.649  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.649  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:22:22.649   06:32:39	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:22:22.649  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:22.649  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms
00:22:22.649  
00:22:22.649  --- 10.0.0.3 ping statistics ---
00:22:22.649  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.649  rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:22:22.649   06:32:39	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:22.649  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:22.649  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:22:22.649  
00:22:22.649  --- 10.0.0.1 ping statistics ---
00:22:22.649  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.649  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:22:22.649   06:32:39	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:22.649   06:32:39	-- nvmf/common.sh@421 -- # return 0
00:22:22.649   06:32:39	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:22:22.649   06:32:39	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:22.649   06:32:39	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:22:22.649   06:32:39	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:22:22.649   06:32:39	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:22.650   06:32:39	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:22:22.650   06:32:39	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:22:22.650   06:32:39	-- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:22:22.650   06:32:39	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:22:22.650   06:32:39	-- common/autotest_common.sh@722 -- # xtrace_disable
00:22:22.650   06:32:39	-- common/autotest_common.sh@10 -- # set +x
00:22:22.650   06:32:39	-- nvmf/common.sh@469 -- # nvmfpid=86036
00:22:22.650   06:32:39	-- nvmf/common.sh@470 -- # waitforlisten 86036
00:22:22.650   06:32:39	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:22.650   06:32:39	-- common/autotest_common.sh@829 -- # '[' -z 86036 ']'
00:22:22.650   06:32:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:22.650   06:32:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:22.650   06:32:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:22.650  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:22.650   06:32:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:22.650   06:32:39	-- common/autotest_common.sh@10 -- # set +x
00:22:22.650  [2024-12-16 06:32:39.472753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:22.650  [2024-12-16 06:32:39.472849] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:22.650  [2024-12-16 06:32:39.614853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:22.908  [2024-12-16 06:32:39.751468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:22.908  [2024-12-16 06:32:39.751674] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:22.908  [2024-12-16 06:32:39.751692] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:22.908  [2024-12-16 06:32:39.751704] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:22.908  [2024-12-16 06:32:39.751748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
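With the topology in place, the SPDK target is launched inside the namespace and the harness blocks until its RPC socket answers. A sketch of the equivalent steps, using the binary path from the trace; the polling loop is an illustrative stand-in for waitforlisten, not the helper's actual implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the default RPC socket until the target is ready to accept commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done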
00:22:23.844   06:32:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:23.844   06:32:40	-- common/autotest_common.sh@862 -- # return 0
00:22:23.844   06:32:40	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:22:23.844   06:32:40	-- common/autotest_common.sh@728 -- # xtrace_disable
00:22:23.844   06:32:40	-- common/autotest_common.sh@10 -- # set +x
00:22:23.844   06:32:40	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:23.844   06:32:40	-- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:22:23.844   06:32:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:23.844   06:32:40	-- common/autotest_common.sh@10 -- # set +x
00:22:23.844  [2024-12-16 06:32:40.555025] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:23.844  [2024-12-16 06:32:40.563193] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:22:23.844  null0
00:22:23.844  [2024-12-16 06:32:40.595087] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:23.844   06:32:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:23.844   06:32:40	-- host/discovery_remove_ifc.sh@59 -- # hostpid=86086
00:22:23.844   06:32:40	-- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:22:23.844   06:32:40	-- host/discovery_remove_ifc.sh@60 -- # waitforlisten 86086 /tmp/host.sock
00:22:23.844   06:32:40	-- common/autotest_common.sh@829 -- # '[' -z 86086 ']'
00:22:23.844   06:32:40	-- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
00:22:23.844   06:32:40	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:23.844   06:32:40	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:22:23.844  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:22:23.844   06:32:40	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:23.844   06:32:40	-- common/autotest_common.sh@10 -- # set +x
00:22:23.844  [2024-12-16 06:32:40.680794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:23.844  [2024-12-16 06:32:40.680897] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86086 ]
00:22:24.102  [2024-12-16 06:32:40.819433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:24.102  [2024-12-16 06:32:40.932620] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:24.102  [2024-12-16 06:32:40.932829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:24.669   06:32:41	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:24.669   06:32:41	-- common/autotest_common.sh@862 -- # return 0
00:22:24.669   06:32:41	-- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:24.669   06:32:41	-- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:22:24.669   06:32:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:24.669   06:32:41	-- common/autotest_common.sh@10 -- # set +x
00:22:24.669   06:32:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:24.669   06:32:41	-- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:22:24.669   06:32:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:24.669   06:32:41	-- common/autotest_common.sh@10 -- # set +x
00:22:24.928   06:32:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:24.928   06:32:41	-- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:22:24.928   06:32:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:24.928   06:32:41	-- common/autotest_common.sh@10 -- # set +x
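Discovery is then driven from a second, host-side SPDK app listening on /tmp/host.sock. The trace shows error detection enabled in the NVMe bdev layer, framework init, and a discovery connection to the target's discovery service on 10.0.0.2:8009 with deliberately short loss/reconnect timers so the interface-removal case below resolves within a few seconds. The same sequence expressed as plain rpc.py calls (the rpc.py path is assumed to match the repo layout in the trace):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
    $RPC bdev_nvme_set_options -e 1
    $RPC framework_start_init
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach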
00:22:25.863  [2024-12-16 06:32:42.718081] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:25.863  [2024-12-16 06:32:42.718111] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:25.863  [2024-12-16 06:32:42.718129] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:25.863  [2024-12-16 06:32:42.804180] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:22:26.122  [2024-12-16 06:32:42.859736] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:22:26.122  [2024-12-16 06:32:42.859783] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:22:26.122  [2024-12-16 06:32:42.859810] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:22:26.122  [2024-12-16 06:32:42.859825] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:26.122  [2024-12-16 06:32:42.859842] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:22:26.122   06:32:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:26.122   06:32:42	-- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:26.122    06:32:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:26.122    06:32:42	-- common/autotest_common.sh@10 -- # set +x
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:26.122  [2024-12-16 06:32:42.866671] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x71a840 was disconnected and freed. delete nvme_qpair.
00:22:26.122    06:32:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:26.122   06:32:42	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
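The get_bdev_list / wait_for_bdev pair used throughout the rest of the test can be read straight off the xtrace: the bdev list is bdev_get_bdevs filtered through jq, sorted and flattened onto one line, and the caller re-checks it once per second until it matches the expected value. A hedged reconstruction (names taken from the trace; the retry limits of the real helpers may differ):

    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Re-check once per second until the list equals the expected bdev name(s).
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }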
00:22:26.122   06:32:42	-- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
00:22:26.122   06:32:42	-- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
00:22:26.122   06:32:42	-- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:26.122    06:32:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:26.122    06:32:42	-- common/autotest_common.sh@10 -- # set +x
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:26.122    06:32:42	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:26.122    06:32:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:26.122   06:32:42	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:22:26.122   06:32:42	-- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:22:27.058    06:32:43	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:27.058    06:32:43	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:27.058    06:32:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:27.058    06:32:43	-- common/autotest_common.sh@10 -- # set +x
00:22:27.058    06:32:43	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:27.058    06:32:43	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:27.058    06:32:43	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:27.058    06:32:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:27.316   06:32:44	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:22:27.316   06:32:44	-- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:22:28.252    06:32:45	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:28.252    06:32:45	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:28.252    06:32:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:28.252    06:32:45	-- common/autotest_common.sh@10 -- # set +x
00:22:28.252    06:32:45	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:28.252    06:32:45	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:28.252    06:32:45	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:28.252    06:32:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:28.252   06:32:45	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:22:28.252   06:32:45	-- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:22:29.188    06:32:46	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:29.188    06:32:46	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:29.188    06:32:46	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:29.188    06:32:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.188    06:32:46	-- common/autotest_common.sh@10 -- # set +x
00:22:29.188    06:32:46	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:29.188    06:32:46	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:29.188    06:32:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.447   06:32:46	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:22:29.447   06:32:46	-- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:22:30.383    06:32:47	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:30.383    06:32:47	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:30.383    06:32:47	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:30.383    06:32:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.383    06:32:47	-- common/autotest_common.sh@10 -- # set +x
00:22:30.383    06:32:47	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:30.383    06:32:47	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:30.383    06:32:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.383   06:32:47	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:22:30.383   06:32:47	-- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:22:31.319    06:32:48	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:31.319    06:32:48	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:31.319    06:32:48	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:31.319    06:32:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.319    06:32:48	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:31.319    06:32:48	-- common/autotest_common.sh@10 -- # set +x
00:22:31.319    06:32:48	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:31.319    06:32:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.319  [2024-12-16 06:32:48.290409] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:22:31.319  [2024-12-16 06:32:48.290530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:31.319   06:32:48	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:22:31.319  [2024-12-16 06:32:48.290548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:31.319  [2024-12-16 06:32:48.290560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:31.319  [2024-12-16 06:32:48.290570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:31.319  [2024-12-16 06:32:48.290579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:31.319  [2024-12-16 06:32:48.290588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:31.319  [2024-12-16 06:32:48.290597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:31.319  [2024-12-16 06:32:48.290606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:31.319  [2024-12-16 06:32:48.290615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:31.319  [2024-12-16 06:32:48.290624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:31.319  [2024-12-16 06:32:48.290633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6919f0 is same with the state(5) to be set
00:22:31.319   06:32:48	-- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:22:31.577  [2024-12-16 06:32:48.300405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6919f0 (9): Bad file descriptor
00:22:31.577  [2024-12-16 06:32:48.310423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:32.513    06:32:49	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:32.513    06:32:49	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:32.513    06:32:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:32.513    06:32:49	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:32.513    06:32:49	-- common/autotest_common.sh@10 -- # set +x
00:22:32.513    06:32:49	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:32.513    06:32:49	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:32.513  [2024-12-16 06:32:49.317615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:22:33.449  [2024-12-16 06:32:50.341607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:22:33.449  [2024-12-16 06:32:50.341699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6919f0 with addr=10.0.0.2, port=4420
00:22:33.449  [2024-12-16 06:32:50.341729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6919f0 is same with the state(5) to be set
00:22:33.449  [2024-12-16 06:32:50.341775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:33.449  [2024-12-16 06:32:50.341797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:33.449  [2024-12-16 06:32:50.341816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:33.449  [2024-12-16 06:32:50.341837] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state
00:22:33.449  [2024-12-16 06:32:50.342652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6919f0 (9): Bad file descriptor
00:22:33.449  [2024-12-16 06:32:50.342726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:33.449  [2024-12-16 06:32:50.342779] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:22:33.449  [2024-12-16 06:32:50.342846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:33.449  [2024-12-16 06:32:50.342876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.449  [2024-12-16 06:32:50.342907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:33.449  [2024-12-16 06:32:50.342927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.449  [2024-12-16 06:32:50.342948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:33.449  [2024-12-16 06:32:50.342970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.449  [2024-12-16 06:32:50.342992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:33.449  [2024-12-16 06:32:50.343011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.449  [2024-12-16 06:32:50.343033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:33.449  [2024-12-16 06:32:50.343054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.449  [2024-12-16 06:32:50.343073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state.
00:22:33.449  [2024-12-16 06:32:50.343132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x691e00 (9): Bad file descriptor
00:22:33.449  [2024-12-16 06:32:50.344135] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:22:33.449  [2024-12-16 06:32:50.344189] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register
00:22:33.449    06:32:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.449   06:32:50	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:22:33.449   06:32:50	-- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:34.826    06:32:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.826    06:32:51	-- common/autotest_common.sh@10 -- # set +x
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:34.826    06:32:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.826   06:32:51	-- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:22:34.826   06:32:51	-- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:22:34.826   06:32:51	-- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
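The failure injection itself is just an address removal and link-down on the target-side data interface inside the namespace, followed (once the bdev has drained) by the reverse operations. Condensed from the trace:

    # Take the target's data interface away from under the connected controller...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ...wait for the bdev list to drain (reconnects fail, ctrlr-loss-timeout expires),
    # then restore the interface and wait for discovery to attach a fresh nvme1n1.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up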
00:22:34.826   06:32:51	-- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:34.826    06:32:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.826    06:32:51	-- common/autotest_common.sh@10 -- # set +x
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:34.826    06:32:51	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:34.826    06:32:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.826   06:32:51	-- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:22:34.826   06:32:51	-- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:22:35.393  [2024-12-16 06:32:52.349530] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:35.393  [2024-12-16 06:32:52.349550] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:35.393  [2024-12-16 06:32:52.349566] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:35.652  [2024-12-16 06:32:52.435634] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:22:35.652  [2024-12-16 06:32:52.490327] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:22:35.652  [2024-12-16 06:32:52.490367] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:22:35.652  [2024-12-16 06:32:52.490387] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:22:35.652  [2024-12-16 06:32:52.490400] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:22:35.653  [2024-12-16 06:32:52.490407] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:22:35.653  [2024-12-16 06:32:52.498034] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6d5080 was disconnected and freed. delete nvme_qpair.
00:22:35.653    06:32:52	-- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:22:35.653    06:32:52	-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:35.653    06:32:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.653    06:32:52	-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:22:35.653    06:32:52	-- host/discovery_remove_ifc.sh@29 -- # sort
00:22:35.653    06:32:52	-- common/autotest_common.sh@10 -- # set +x
00:22:35.653    06:32:52	-- host/discovery_remove_ifc.sh@29 -- # xargs
00:22:35.653    06:32:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.653   06:32:52	-- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:22:35.653   06:32:52	-- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:22:35.653   06:32:52	-- host/discovery_remove_ifc.sh@90 -- # killprocess 86086
00:22:35.653   06:32:52	-- common/autotest_common.sh@936 -- # '[' -z 86086 ']'
00:22:35.653   06:32:52	-- common/autotest_common.sh@940 -- # kill -0 86086
00:22:35.653    06:32:52	-- common/autotest_common.sh@941 -- # uname
00:22:35.653   06:32:52	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:35.653    06:32:52	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86086
00:22:35.653   06:32:52	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:35.653  killing process with pid 86086
00:22:35.653   06:32:52	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:35.653   06:32:52	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86086'
00:22:35.653   06:32:52	-- common/autotest_common.sh@955 -- # kill 86086
00:22:35.653   06:32:52	-- common/autotest_common.sh@960 -- # wait 86086
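killprocess, as exercised here, verifies that the pid is set and still alive, looks up the process name, logs, then kills and waits on the pid. An approximate reconstruction of the branch taken in this trace (the uname and sudo branches are not exercised here and are omitted):

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1                  # process must still be running
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }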
00:22:35.912   06:32:52	-- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:22:35.912   06:32:52	-- nvmf/common.sh@476 -- # nvmfcleanup
00:22:35.912   06:32:52	-- nvmf/common.sh@116 -- # sync
00:22:35.912   06:32:52	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:22:35.912   06:32:52	-- nvmf/common.sh@119 -- # set +e
00:22:35.912   06:32:52	-- nvmf/common.sh@120 -- # for i in {1..20}
00:22:35.912   06:32:52	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:22:35.912  rmmod nvme_tcp
00:22:36.170  rmmod nvme_fabrics
00:22:36.170  rmmod nvme_keyring
00:22:36.170   06:32:52	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:22:36.170   06:32:52	-- nvmf/common.sh@123 -- # set -e
00:22:36.170   06:32:52	-- nvmf/common.sh@124 -- # return 0
00:22:36.170   06:32:52	-- nvmf/common.sh@477 -- # '[' -n 86036 ']'
00:22:36.170   06:32:52	-- nvmf/common.sh@478 -- # killprocess 86036
00:22:36.170   06:32:52	-- common/autotest_common.sh@936 -- # '[' -z 86036 ']'
00:22:36.170   06:32:52	-- common/autotest_common.sh@940 -- # kill -0 86036
00:22:36.170    06:32:52	-- common/autotest_common.sh@941 -- # uname
00:22:36.170   06:32:52	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:36.170    06:32:52	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86036
00:22:36.170  killing process with pid 86036
00:22:36.170   06:32:52	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:36.170   06:32:52	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:36.170   06:32:52	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86036'
00:22:36.170   06:32:52	-- common/autotest_common.sh@955 -- # kill 86036
00:22:36.170   06:32:52	-- common/autotest_common.sh@960 -- # wait 86036
00:22:36.429   06:32:53	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:22:36.429   06:32:53	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:22:36.429   06:32:53	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:22:36.429   06:32:53	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:36.429   06:32:53	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:22:36.429   06:32:53	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:36.429   06:32:53	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:36.429    06:32:53	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:36.429   06:32:53	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
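nvmf_tcp_fini then tears the topology back down: the target namespace is removed (the _remove_spdk_ns call has its output squelched to /dev/null here) and the initiator interface's IPv4 addresses are flushed. A rough equivalent, assuming _remove_spdk_ns ultimately deletes the namespace created earlier:

    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed effect of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if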
00:22:36.429  
00:22:36.429  real	0m14.489s
00:22:36.429  user	0m24.693s
00:22:36.429  sys	0m1.635s
00:22:36.429   06:32:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:36.429   06:32:53	-- common/autotest_common.sh@10 -- # set +x
00:22:36.429  ************************************
00:22:36.429  END TEST nvmf_discovery_remove_ifc
00:22:36.429  ************************************
00:22:36.429   06:32:53	-- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]]
00:22:36.429   06:32:53	-- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp
00:22:36.429   06:32:53	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:22:36.429   06:32:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:36.429   06:32:53	-- common/autotest_common.sh@10 -- # set +x
00:22:36.429  ************************************
00:22:36.429  START TEST nvmf_digest
00:22:36.429  ************************************
00:22:36.429   06:32:53	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp
00:22:36.688  * Looking for test storage...
00:22:36.688  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:22:36.688    06:32:53	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:22:36.688     06:32:53	-- common/autotest_common.sh@1690 -- # lcov --version
00:22:36.688     06:32:53	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:22:36.688    06:32:53	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:22:36.688    06:32:53	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:22:36.688    06:32:53	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:22:36.688    06:32:53	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:22:36.688    06:32:53	-- scripts/common.sh@335 -- # IFS=.-:
00:22:36.688    06:32:53	-- scripts/common.sh@335 -- # read -ra ver1
00:22:36.688    06:32:53	-- scripts/common.sh@336 -- # IFS=.-:
00:22:36.688    06:32:53	-- scripts/common.sh@336 -- # read -ra ver2
00:22:36.688    06:32:53	-- scripts/common.sh@337 -- # local 'op=<'
00:22:36.688    06:32:53	-- scripts/common.sh@339 -- # ver1_l=2
00:22:36.688    06:32:53	-- scripts/common.sh@340 -- # ver2_l=1
00:22:36.688    06:32:53	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:22:36.688    06:32:53	-- scripts/common.sh@343 -- # case "$op" in
00:22:36.688    06:32:53	-- scripts/common.sh@344 -- # : 1
00:22:36.688    06:32:53	-- scripts/common.sh@363 -- # (( v = 0 ))
00:22:36.688    06:32:53	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:36.688     06:32:53	-- scripts/common.sh@364 -- # decimal 1
00:22:36.688     06:32:53	-- scripts/common.sh@352 -- # local d=1
00:22:36.688     06:32:53	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:36.688     06:32:53	-- scripts/common.sh@354 -- # echo 1
00:22:36.688    06:32:53	-- scripts/common.sh@364 -- # ver1[v]=1
00:22:36.688     06:32:53	-- scripts/common.sh@365 -- # decimal 2
00:22:36.688     06:32:53	-- scripts/common.sh@352 -- # local d=2
00:22:36.688     06:32:53	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:36.688     06:32:53	-- scripts/common.sh@354 -- # echo 2
00:22:36.688    06:32:53	-- scripts/common.sh@365 -- # ver2[v]=2
00:22:36.688    06:32:53	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:22:36.688    06:32:53	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:22:36.688    06:32:53	-- scripts/common.sh@367 -- # return 0
00:22:36.688    06:32:53	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:36.688    06:32:53	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:22:36.688  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:36.688  		--rc genhtml_branch_coverage=1
00:22:36.688  		--rc genhtml_function_coverage=1
00:22:36.688  		--rc genhtml_legend=1
00:22:36.688  		--rc geninfo_all_blocks=1
00:22:36.688  		--rc geninfo_unexecuted_blocks=1
00:22:36.688  		
00:22:36.688  		'
00:22:36.688    06:32:53	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:22:36.688  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:36.688  		--rc genhtml_branch_coverage=1
00:22:36.688  		--rc genhtml_function_coverage=1
00:22:36.688  		--rc genhtml_legend=1
00:22:36.688  		--rc geninfo_all_blocks=1
00:22:36.688  		--rc geninfo_unexecuted_blocks=1
00:22:36.688  		
00:22:36.688  		'
00:22:36.688    06:32:53	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:22:36.688  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:36.688  		--rc genhtml_branch_coverage=1
00:22:36.688  		--rc genhtml_function_coverage=1
00:22:36.688  		--rc genhtml_legend=1
00:22:36.688  		--rc geninfo_all_blocks=1
00:22:36.688  		--rc geninfo_unexecuted_blocks=1
00:22:36.688  		
00:22:36.688  		'
00:22:36.688    06:32:53	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:22:36.688  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:36.688  		--rc genhtml_branch_coverage=1
00:22:36.688  		--rc genhtml_function_coverage=1
00:22:36.688  		--rc genhtml_legend=1
00:22:36.688  		--rc geninfo_all_blocks=1
00:22:36.688  		--rc geninfo_unexecuted_blocks=1
00:22:36.688  		
00:22:36.688  		'
00:22:36.688   06:32:53	-- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:36.688     06:32:53	-- nvmf/common.sh@7 -- # uname -s
00:22:36.688    06:32:53	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:36.688    06:32:53	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:36.688    06:32:53	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:36.688    06:32:53	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:36.688    06:32:53	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:36.688    06:32:53	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:36.688    06:32:53	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:36.688    06:32:53	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:36.688    06:32:53	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:36.688     06:32:53	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:36.688    06:32:53	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:22:36.688    06:32:53	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:22:36.688    06:32:53	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:36.688    06:32:53	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:36.688    06:32:53	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:22:36.688    06:32:53	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:36.688     06:32:53	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:36.688     06:32:53	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:36.688     06:32:53	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:36.688      06:32:53	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:36.689      06:32:53	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:36.689      06:32:53	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:36.689      06:32:53	-- paths/export.sh@5 -- # export PATH
00:22:36.689      06:32:53	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:36.689    06:32:53	-- nvmf/common.sh@46 -- # : 0
00:22:36.689    06:32:53	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:22:36.689    06:32:53	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:22:36.689    06:32:53	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:22:36.689    06:32:53	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:36.689    06:32:53	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:36.689    06:32:53	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:22:36.689    06:32:53	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:22:36.689    06:32:53	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:22:36.689   06:32:53	-- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:22:36.689   06:32:53	-- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock
00:22:36.689   06:32:53	-- host/digest.sh@16 -- # runtime=2
00:22:36.689   06:32:53	-- host/digest.sh@130 -- # [[ tcp != \t\c\p ]]
00:22:36.689   06:32:53	-- host/digest.sh@132 -- # nvmftestinit
00:22:36.689   06:32:53	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:22:36.689   06:32:53	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:36.689   06:32:53	-- nvmf/common.sh@436 -- # prepare_net_devs
00:22:36.689   06:32:53	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:22:36.689   06:32:53	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:22:36.689   06:32:53	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:36.689   06:32:53	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:36.689    06:32:53	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:36.689   06:32:53	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:22:36.689   06:32:53	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:22:36.689   06:32:53	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:22:36.689   06:32:53	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:22:36.689   06:32:53	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:22:36.689   06:32:53	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:22:36.689   06:32:53	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:36.689   06:32:53	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:36.689   06:32:53	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:22:36.689   06:32:53	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:22:36.689   06:32:53	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:22:36.689   06:32:53	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:22:36.689   06:32:53	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:22:36.689   06:32:53	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:36.689   06:32:53	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:22:36.689   06:32:53	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:22:36.689   06:32:53	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:22:36.689   06:32:53	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:22:36.689   06:32:53	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:22:36.689   06:32:53	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:22:36.689  Cannot find device "nvmf_tgt_br"
00:22:36.689   06:32:53	-- nvmf/common.sh@154 -- # true
00:22:36.689   06:32:53	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:22:36.948  Cannot find device "nvmf_tgt_br2"
00:22:36.948   06:32:53	-- nvmf/common.sh@155 -- # true
00:22:36.948   06:32:53	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:22:36.948   06:32:53	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:22:36.948  Cannot find device "nvmf_tgt_br"
00:22:36.948   06:32:53	-- nvmf/common.sh@157 -- # true
00:22:36.948   06:32:53	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:22:36.948  Cannot find device "nvmf_tgt_br2"
00:22:36.948   06:32:53	-- nvmf/common.sh@158 -- # true
00:22:36.948   06:32:53	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:22:36.948   06:32:53	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:22:36.948   06:32:53	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:36.948  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:36.948   06:32:53	-- nvmf/common.sh@161 -- # true
00:22:36.948   06:32:53	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:36.948  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:36.948   06:32:53	-- nvmf/common.sh@162 -- # true
00:22:36.948   06:32:53	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:22:36.948   06:32:53	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:22:36.948   06:32:53	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:22:36.948   06:32:53	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:22:36.948   06:32:53	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:22:36.948   06:32:53	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:22:36.948   06:32:53	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:22:36.948   06:32:53	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:22:36.948   06:32:53	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:22:36.948   06:32:53	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:22:36.948   06:32:53	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:22:36.948   06:32:53	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:22:36.948   06:32:53	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:22:36.948   06:32:53	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:36.948   06:32:53	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:36.948   06:32:53	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:36.948   06:32:53	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:22:36.948   06:32:53	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:22:36.948   06:32:53	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:22:37.225   06:32:53	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:37.225   06:32:53	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:37.225   06:32:53	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:37.225   06:32:53	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:37.225   06:32:53	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:22:37.225  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:37.225  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms
00:22:37.225  
00:22:37.225  --- 10.0.0.2 ping statistics ---
00:22:37.225  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:37.225  rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:22:37.225   06:32:53	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:22:37.225  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:37.225  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms
00:22:37.225  
00:22:37.225  --- 10.0.0.3 ping statistics ---
00:22:37.225  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:37.225  rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:22:37.225   06:32:53	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:37.225  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:37.225  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms
00:22:37.225  
00:22:37.225  --- 10.0.0.1 ping statistics ---
00:22:37.225  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:37.225  rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:22:37.225   06:32:53	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:37.225   06:32:53	-- nvmf/common.sh@421 -- # return 0
00:22:37.225   06:32:53	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:22:37.225   06:32:53	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:37.225   06:32:53	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:22:37.225   06:32:53	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:22:37.225   06:32:53	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:37.225   06:32:53	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:22:37.225   06:32:53	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:22:37.225   06:32:54	-- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT
00:22:37.225   06:32:54	-- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest
00:22:37.225   06:32:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:22:37.225   06:32:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:37.225   06:32:54	-- common/autotest_common.sh@10 -- # set +x
00:22:37.225  ************************************
00:22:37.225  START TEST nvmf_digest_clean
00:22:37.225  ************************************
00:22:37.225   06:32:54	-- common/autotest_common.sh@1114 -- # run_digest
00:22:37.225   06:32:54	-- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc
00:22:37.225   06:32:54	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:22:37.225   06:32:54	-- common/autotest_common.sh@722 -- # xtrace_disable
00:22:37.225   06:32:54	-- common/autotest_common.sh@10 -- # set +x
00:22:37.225  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:37.225   06:32:54	-- nvmf/common.sh@469 -- # nvmfpid=86513
00:22:37.225   06:32:54	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:22:37.225   06:32:54	-- nvmf/common.sh@470 -- # waitforlisten 86513
00:22:37.225   06:32:54	-- common/autotest_common.sh@829 -- # '[' -z 86513 ']'
00:22:37.225   06:32:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:37.225   06:32:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:37.225   06:32:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:37.225   06:32:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:37.225   06:32:54	-- common/autotest_common.sh@10 -- # set +x
00:22:37.225  [2024-12-16 06:32:54.077702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:37.225  [2024-12-16 06:32:54.077785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:37.495  [2024-12-16 06:32:54.218522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:37.495  [2024-12-16 06:32:54.324539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:37.495  [2024-12-16 06:32:54.324708] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:37.495  [2024-12-16 06:32:54.324725] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:37.495  [2024-12-16 06:32:54.324736] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:37.495  [2024-12-16 06:32:54.324774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:38.430   06:32:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:38.430   06:32:55	-- common/autotest_common.sh@862 -- # return 0
00:22:38.430   06:32:55	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:22:38.430   06:32:55	-- common/autotest_common.sh@728 -- # xtrace_disable
00:22:38.430   06:32:55	-- common/autotest_common.sh@10 -- # set +x
00:22:38.430   06:32:55	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:38.430   06:32:55	-- host/digest.sh@120 -- # common_target_config
00:22:38.430   06:32:55	-- host/digest.sh@43 -- # rpc_cmd
00:22:38.430   06:32:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:38.430   06:32:55	-- common/autotest_common.sh@10 -- # set +x
00:22:38.430  null0
00:22:38.430  [2024-12-16 06:32:55.209040] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:38.430  [2024-12-16 06:32:55.233180] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:38.430   06:32:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:38.430   06:32:55	-- host/digest.sh@122 -- # run_bperf randread 4096 128
00:22:38.430   06:32:55	-- host/digest.sh@77 -- # local rw bs qd
00:22:38.430   06:32:55	-- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:22:38.430   06:32:55	-- host/digest.sh@80 -- # rw=randread
00:22:38.430   06:32:55	-- host/digest.sh@80 -- # bs=4096
00:22:38.430   06:32:55	-- host/digest.sh@80 -- # qd=128
00:22:38.430   06:32:55	-- host/digest.sh@82 -- # bperfpid=86563
00:22:38.430   06:32:55	-- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:22:38.430   06:32:55	-- host/digest.sh@83 -- # waitforlisten 86563 /var/tmp/bperf.sock
00:22:38.430   06:32:55	-- common/autotest_common.sh@829 -- # '[' -z 86563 ']'
00:22:38.430   06:32:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:38.430   06:32:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:38.430   06:32:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:38.430  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:38.430   06:32:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:38.430   06:32:55	-- common/autotest_common.sh@10 -- # set +x
00:22:38.430  [2024-12-16 06:32:55.300215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:38.431  [2024-12-16 06:32:55.300563] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86563 ]
00:22:38.688  [2024-12-16 06:32:55.441326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:38.688  [2024-12-16 06:32:55.561398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:39.256   06:32:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:39.256   06:32:56	-- common/autotest_common.sh@862 -- # return 0
00:22:39.256   06:32:56	-- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:22:39.256   06:32:56	-- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:22:39.256   06:32:56	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:22:39.823   06:32:56	-- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:39.823   06:32:56	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:40.081  nvme0n1
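For the digest run, I/O is generated by bdevperf rather than a kernel initiator: the tool is started with its own RPC socket (/var/tmp/bperf.sock), initialized, and an NVMe controller is attached over TCP with data digest enabled (--ddgst), which is what produces the crc32c work measured below. The command sequence, taken from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC framework_start_init
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # yields the bdev nvme0n1 reported above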
00:22:40.082   06:32:56	-- host/digest.sh@91 -- # bperf_py perform_tests
00:22:40.082   06:32:56	-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:40.082  Running I/O for 2 seconds...
00:22:41.984  
00:22:41.984                                                                                                  Latency(us)
00:22:41.984  
[2024-12-16T06:32:58.960Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:41.984  
[2024-12-16T06:32:58.960Z]  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:41.984  	 nvme0n1             :       2.00   23952.15      93.56       0.00     0.00    5338.96    2338.44   12690.15
00:22:41.984  
[2024-12-16T06:32:58.960Z]  ===================================================================================================================
00:22:41.984  
[2024-12-16T06:32:58.960Z]  Total                       :              23952.15      93.56       0.00     0.00    5338.96    2338.44   12690.15
00:22:41.984  0
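[editor's note] As a quick cross-check of the table, the throughput column is just IOPS times the 4096-byte I/O size: 23952.15 x 4096 / 2^20 ~ 93.56 MiB/s, matching the MiB/s value reported for both the job and the total. The same relation holds for the 128 KiB runs later in this log (e.g. 8652.66 x 131072 / 2^20 ~ 1081.58 MiB/s).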
00:22:42.243   06:32:58	-- host/digest.sh@92 -- # read -r acc_module acc_executed
00:22:42.243    06:32:58	-- host/digest.sh@92 -- # get_accel_stats
00:22:42.243    06:32:58	-- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:22:42.243    06:32:58	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:22:42.243    06:32:58	-- host/digest.sh@37 -- # jq -rc '.operations[]
00:22:42.243  			| select(.opcode=="crc32c")
00:22:42.243  			| "\(.module_name) \(.executed)"'
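[editor's note] The acc_module/acc_executed pair consumed by the read -r above comes from piping the accel statistics RPC through the jq filter just shown; run standalone it produces a single line such as "software <count>" (the test expects the software module, since no hardware accel module is configured here):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'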
00:22:42.243   06:32:59	-- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:22:42.243   06:32:59	-- host/digest.sh@93 -- # exp_module=software
00:22:42.501   06:32:59	-- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:22:42.502   06:32:59	-- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:22:42.502   06:32:59	-- host/digest.sh@97 -- # killprocess 86563
00:22:42.502   06:32:59	-- common/autotest_common.sh@936 -- # '[' -z 86563 ']'
00:22:42.502   06:32:59	-- common/autotest_common.sh@940 -- # kill -0 86563
00:22:42.502    06:32:59	-- common/autotest_common.sh@941 -- # uname
00:22:42.502   06:32:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:42.502    06:32:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86563
00:22:42.502   06:32:59	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:42.502   06:32:59	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:42.502  killing process with pid 86563
00:22:42.502   06:32:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86563'
00:22:42.502   06:32:59	-- common/autotest_common.sh@955 -- # kill 86563
00:22:42.502  Received shutdown signal, test time was about 2.000000 seconds
00:22:42.502  
00:22:42.502                                                                                                  Latency(us)
00:22:42.502  
[2024-12-16T06:32:59.478Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:42.502  
[2024-12-16T06:32:59.478Z]  ===================================================================================================================
00:22:42.502  
[2024-12-16T06:32:59.478Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:42.502   06:32:59	-- common/autotest_common.sh@960 -- # wait 86563
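[editor's note] The killprocess teardown traced above reduces to a liveness check, a guard against killing a sudo wrapper directly, and a kill/wait pair; a minimal sketch of the same steps:

    if kill -0 "$bperfpid" && [ "$(ps --no-headers -o comm= "$bperfpid")" != sudo ]; then
        kill "$bperfpid" && wait "$bperfpid"    # stop bdevperf and reap it
    fi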
00:22:42.760   06:32:59	-- host/digest.sh@123 -- # run_bperf randread 131072 16
00:22:42.760   06:32:59	-- host/digest.sh@77 -- # local rw bs qd
00:22:42.760   06:32:59	-- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:22:42.760   06:32:59	-- host/digest.sh@80 -- # rw=randread
00:22:42.760   06:32:59	-- host/digest.sh@80 -- # bs=131072
00:22:42.760   06:32:59	-- host/digest.sh@80 -- # qd=16
00:22:42.760   06:32:59	-- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:22:42.760   06:32:59	-- host/digest.sh@82 -- # bperfpid=86653
00:22:42.760   06:32:59	-- host/digest.sh@83 -- # waitforlisten 86653 /var/tmp/bperf.sock
00:22:42.760   06:32:59	-- common/autotest_common.sh@829 -- # '[' -z 86653 ']'
00:22:42.760   06:32:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:42.760   06:32:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:42.760  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:42.760   06:32:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:42.760   06:32:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:42.760   06:32:59	-- common/autotest_common.sh@10 -- # set +x
00:22:42.760  I/O size of 131072 is greater than zero copy threshold (65536).
00:22:42.760  Zero copy mechanism will not be used.
00:22:42.760  [2024-12-16 06:32:59.615964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:42.760  [2024-12-16 06:32:59.616060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86653 ]
00:22:43.019  [2024-12-16 06:32:59.740082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:43.019  [2024-12-16 06:32:59.839207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:43.955   06:33:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:43.955   06:33:00	-- common/autotest_common.sh@862 -- # return 0
00:22:43.955   06:33:00	-- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:22:43.955   06:33:00	-- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:22:43.955   06:33:00	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:22:44.213   06:33:00	-- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:44.213   06:33:00	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:44.472  nvme0n1
00:22:44.472   06:33:01	-- host/digest.sh@91 -- # bperf_py perform_tests
00:22:44.472   06:33:01	-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:44.472  I/O size of 131072 is greater than zero copy threshold (65536).
00:22:44.472  Zero copy mechanism will not be used.
00:22:44.472  Running I/O for 2 seconds...
00:22:47.004  
00:22:47.004                                                                                                  Latency(us)
00:22:47.004  
[2024-12-16T06:33:03.980Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:47.004  
[2024-12-16T06:33:03.980Z]  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:47.004  	 nvme0n1             :       2.00    8652.66    1081.58       0.00     0.00    1846.12     569.72    4349.21
00:22:47.004  
[2024-12-16T06:33:03.980Z]  ===================================================================================================================
00:22:47.004  
[2024-12-16T06:33:03.980Z]  Total                       :               8652.66    1081.58       0.00     0.00    1846.12     569.72    4349.21
00:22:47.004  0
00:22:47.004   06:33:03	-- host/digest.sh@92 -- # read -r acc_module acc_executed
00:22:47.004    06:33:03	-- host/digest.sh@92 -- # get_accel_stats
00:22:47.004    06:33:03	-- host/digest.sh@37 -- # jq -rc '.operations[]
00:22:47.004  			| select(.opcode=="crc32c")
00:22:47.004  			| "\(.module_name) \(.executed)"'
00:22:47.004    06:33:03	-- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:22:47.004    06:33:03	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:22:47.004   06:33:03	-- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:22:47.004   06:33:03	-- host/digest.sh@93 -- # exp_module=software
00:22:47.004   06:33:03	-- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:22:47.004   06:33:03	-- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:22:47.004   06:33:03	-- host/digest.sh@97 -- # killprocess 86653
00:22:47.004   06:33:03	-- common/autotest_common.sh@936 -- # '[' -z 86653 ']'
00:22:47.004   06:33:03	-- common/autotest_common.sh@940 -- # kill -0 86653
00:22:47.004    06:33:03	-- common/autotest_common.sh@941 -- # uname
00:22:47.004   06:33:03	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:47.004    06:33:03	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86653
00:22:47.004   06:33:03	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:47.004   06:33:03	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:47.004   06:33:03	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86653'
00:22:47.004  killing process with pid 86653
00:22:47.004   06:33:03	-- common/autotest_common.sh@955 -- # kill 86653
00:22:47.004  Received shutdown signal, test time was about 2.000000 seconds
00:22:47.004  
00:22:47.004                                                                                                  Latency(us)
00:22:47.004  
[2024-12-16T06:33:03.980Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:47.004  
[2024-12-16T06:33:03.980Z]  ===================================================================================================================
00:22:47.004  
[2024-12-16T06:33:03.980Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:47.004   06:33:03	-- common/autotest_common.sh@960 -- # wait 86653
00:22:47.263   06:33:03	-- host/digest.sh@124 -- # run_bperf randwrite 4096 128
00:22:47.263   06:33:03	-- host/digest.sh@77 -- # local rw bs qd
00:22:47.263   06:33:03	-- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:22:47.263   06:33:03	-- host/digest.sh@80 -- # rw=randwrite
00:22:47.263   06:33:03	-- host/digest.sh@80 -- # bs=4096
00:22:47.263   06:33:03	-- host/digest.sh@80 -- # qd=128
00:22:47.263   06:33:03	-- host/digest.sh@82 -- # bperfpid=86739
00:22:47.263   06:33:03	-- host/digest.sh@83 -- # waitforlisten 86739 /var/tmp/bperf.sock
00:22:47.263   06:33:03	-- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:22:47.263   06:33:03	-- common/autotest_common.sh@829 -- # '[' -z 86739 ']'
00:22:47.263   06:33:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:47.263   06:33:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:47.263  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:47.263   06:33:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:47.263   06:33:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:47.263   06:33:03	-- common/autotest_common.sh@10 -- # set +x
00:22:47.263  [2024-12-16 06:33:04.050085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:47.263  [2024-12-16 06:33:04.050204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86739 ]
00:22:47.263  [2024-12-16 06:33:04.186019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:47.521  [2024-12-16 06:33:04.294858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:48.088   06:33:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:48.088   06:33:04	-- common/autotest_common.sh@862 -- # return 0
00:22:48.088   06:33:04	-- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:22:48.088   06:33:04	-- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:22:48.088   06:33:04	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:22:48.346   06:33:05	-- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:48.346   06:33:05	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:48.913  nvme0n1
00:22:48.913   06:33:05	-- host/digest.sh@91 -- # bperf_py perform_tests
00:22:48.913   06:33:05	-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:48.913  Running I/O for 2 seconds...
00:22:50.816  
00:22:50.816                                                                                                  Latency(us)
00:22:50.816  
[2024-12-16T06:33:07.792Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:50.816  
[2024-12-16T06:33:07.792Z]  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:22:50.816  	 nvme0n1             :       2.00   28953.91     113.10       0.00     0.00    4416.69    1876.71    8460.10
00:22:50.816  
[2024-12-16T06:33:07.792Z]  ===================================================================================================================
00:22:50.816  
[2024-12-16T06:33:07.792Z]  Total                       :              28953.91     113.10       0.00     0.00    4416.69    1876.71    8460.10
00:22:50.816  0
00:22:50.816   06:33:07	-- host/digest.sh@92 -- # read -r acc_module acc_executed
00:22:50.816    06:33:07	-- host/digest.sh@92 -- # get_accel_stats
00:22:50.816    06:33:07	-- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:22:50.816    06:33:07	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:22:50.816    06:33:07	-- host/digest.sh@37 -- # jq -rc '.operations[]
00:22:50.816  			| select(.opcode=="crc32c")
00:22:50.816  			| "\(.module_name) \(.executed)"'
00:22:51.075   06:33:07	-- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:22:51.075   06:33:07	-- host/digest.sh@93 -- # exp_module=software
00:22:51.075   06:33:07	-- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:22:51.075   06:33:07	-- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:22:51.075   06:33:07	-- host/digest.sh@97 -- # killprocess 86739
00:22:51.075   06:33:07	-- common/autotest_common.sh@936 -- # '[' -z 86739 ']'
00:22:51.075   06:33:07	-- common/autotest_common.sh@940 -- # kill -0 86739
00:22:51.075    06:33:07	-- common/autotest_common.sh@941 -- # uname
00:22:51.075   06:33:08	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:51.075    06:33:08	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86739
00:22:51.075   06:33:08	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:51.075   06:33:08	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:51.075  killing process with pid 86739
00:22:51.075   06:33:08	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86739'
00:22:51.075  Received shutdown signal, test time was about 2.000000 seconds
00:22:51.075  
00:22:51.075                                                                                                  Latency(us)
00:22:51.075  
[2024-12-16T06:33:08.051Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:51.075  
[2024-12-16T06:33:08.051Z]  ===================================================================================================================
00:22:51.075  
[2024-12-16T06:33:08.051Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:51.075   06:33:08	-- common/autotest_common.sh@955 -- # kill 86739
00:22:51.075   06:33:08	-- common/autotest_common.sh@960 -- # wait 86739
00:22:51.649   06:33:08	-- host/digest.sh@125 -- # run_bperf randwrite 131072 16
00:22:51.649   06:33:08	-- host/digest.sh@77 -- # local rw bs qd
00:22:51.649   06:33:08	-- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:22:51.649   06:33:08	-- host/digest.sh@80 -- # rw=randwrite
00:22:51.649   06:33:08	-- host/digest.sh@80 -- # bs=131072
00:22:51.649   06:33:08	-- host/digest.sh@80 -- # qd=16
00:22:51.649   06:33:08	-- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:22:51.649   06:33:08	-- host/digest.sh@82 -- # bperfpid=86836
00:22:51.649   06:33:08	-- host/digest.sh@83 -- # waitforlisten 86836 /var/tmp/bperf.sock
00:22:51.649   06:33:08	-- common/autotest_common.sh@829 -- # '[' -z 86836 ']'
00:22:51.649   06:33:08	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:51.649   06:33:08	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:51.649  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:51.649   06:33:08	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:51.649   06:33:08	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:51.649   06:33:08	-- common/autotest_common.sh@10 -- # set +x
00:22:51.649  [2024-12-16 06:33:08.369301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:51.649  [2024-12-16 06:33:08.369417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86836 ]
00:22:51.649  I/O size of 131072 is greater than zero copy threshold (65536).
00:22:51.649  Zero copy mechanism will not be used.
00:22:51.649  [2024-12-16 06:33:08.497805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:51.649  [2024-12-16 06:33:08.599022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:52.584   06:33:09	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:52.584   06:33:09	-- common/autotest_common.sh@862 -- # return 0
00:22:52.584   06:33:09	-- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:22:52.584   06:33:09	-- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:22:52.584   06:33:09	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:22:52.842   06:33:09	-- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:52.842   06:33:09	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:53.101  nvme0n1
00:22:53.101   06:33:10	-- host/digest.sh@91 -- # bperf_py perform_tests
00:22:53.101   06:33:10	-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:53.360  I/O size of 131072 is greater than zero copy threshold (65536).
00:22:53.360  Zero copy mechanism will not be used.
00:22:53.360  Running I/O for 2 seconds...
00:22:55.262  
00:22:55.262                                                                                                  Latency(us)
00:22:55.262  
[2024-12-16T06:33:12.238Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:55.262  
[2024-12-16T06:33:12.238Z]  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:22:55.262  	 nvme0n1             :       2.00    8248.93    1031.12       0.00     0.00    1935.26    1638.40    8877.15
00:22:55.262  
[2024-12-16T06:33:12.238Z]  ===================================================================================================================
00:22:55.262  
[2024-12-16T06:33:12.238Z]  Total                       :               8248.93    1031.12       0.00     0.00    1935.26    1638.40    8877.15
00:22:55.262  0
00:22:55.262   06:33:12	-- host/digest.sh@92 -- # read -r acc_module acc_executed
00:22:55.262    06:33:12	-- host/digest.sh@92 -- # get_accel_stats
00:22:55.262    06:33:12	-- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:22:55.262    06:33:12	-- host/digest.sh@37 -- # jq -rc '.operations[]
00:22:55.262  			| select(.opcode=="crc32c")
00:22:55.262  			| "\(.module_name) \(.executed)"'
00:22:55.262    06:33:12	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:22:55.521   06:33:12	-- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:22:55.521   06:33:12	-- host/digest.sh@93 -- # exp_module=software
00:22:55.521   06:33:12	-- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:22:55.521   06:33:12	-- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:22:55.521   06:33:12	-- host/digest.sh@97 -- # killprocess 86836
00:22:55.521   06:33:12	-- common/autotest_common.sh@936 -- # '[' -z 86836 ']'
00:22:55.521   06:33:12	-- common/autotest_common.sh@940 -- # kill -0 86836
00:22:55.521    06:33:12	-- common/autotest_common.sh@941 -- # uname
00:22:55.521   06:33:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:55.521    06:33:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86836
00:22:55.521   06:33:12	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:55.521   06:33:12	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:55.521  killing process with pid 86836
00:22:55.521   06:33:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86836'
00:22:55.521  Received shutdown signal, test time was about 2.000000 seconds
00:22:55.521  
00:22:55.521                                                                                                  Latency(us)
00:22:55.521  
[2024-12-16T06:33:12.497Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:55.521  
[2024-12-16T06:33:12.497Z]  ===================================================================================================================
00:22:55.521  
[2024-12-16T06:33:12.497Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:55.521   06:33:12	-- common/autotest_common.sh@955 -- # kill 86836
00:22:55.521   06:33:12	-- common/autotest_common.sh@960 -- # wait 86836
00:22:55.780   06:33:12	-- host/digest.sh@126 -- # killprocess 86513
00:22:55.780   06:33:12	-- common/autotest_common.sh@936 -- # '[' -z 86513 ']'
00:22:55.780   06:33:12	-- common/autotest_common.sh@940 -- # kill -0 86513
00:22:55.780    06:33:12	-- common/autotest_common.sh@941 -- # uname
00:22:55.780   06:33:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:55.780    06:33:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86513
00:22:56.038   06:33:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:56.038   06:33:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:56.038  killing process with pid 86513
00:22:56.038   06:33:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86513'
00:22:56.038   06:33:12	-- common/autotest_common.sh@955 -- # kill 86513
00:22:56.038   06:33:12	-- common/autotest_common.sh@960 -- # wait 86513
00:22:56.038  
00:22:56.038  real	0m18.978s
00:22:56.038  user	0m34.665s
00:22:56.038  sys	0m5.691s
00:22:56.038   06:33:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:56.038   06:33:12	-- common/autotest_common.sh@10 -- # set +x
00:22:56.038  ************************************
00:22:56.039  END TEST nvmf_digest_clean
00:22:56.039  ************************************
00:22:56.297   06:33:13	-- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error
00:22:56.297   06:33:13	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:22:56.297   06:33:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:56.297   06:33:13	-- common/autotest_common.sh@10 -- # set +x
00:22:56.297  ************************************
00:22:56.297  START TEST nvmf_digest_error
00:22:56.297  ************************************
00:22:56.297   06:33:13	-- common/autotest_common.sh@1114 -- # run_digest_error
00:22:56.297   06:33:13	-- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc
00:22:56.297   06:33:13	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:22:56.297   06:33:13	-- common/autotest_common.sh@722 -- # xtrace_disable
00:22:56.297   06:33:13	-- common/autotest_common.sh@10 -- # set +x
00:22:56.297   06:33:13	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:22:56.297   06:33:13	-- nvmf/common.sh@469 -- # nvmfpid=86949
00:22:56.297   06:33:13	-- nvmf/common.sh@470 -- # waitforlisten 86949
00:22:56.297   06:33:13	-- common/autotest_common.sh@829 -- # '[' -z 86949 ']'
00:22:56.297   06:33:13	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:56.297   06:33:13	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:56.297   06:33:13	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:56.297  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:56.297   06:33:13	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:56.297   06:33:13	-- common/autotest_common.sh@10 -- # set +x
00:22:56.297  [2024-12-16 06:33:13.099712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:56.297  [2024-12-16 06:33:13.099802] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:56.297  [2024-12-16 06:33:13.231895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:56.556  [2024-12-16 06:33:13.299186] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:56.556  [2024-12-16 06:33:13.299314] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:56.556  [2024-12-16 06:33:13.299326] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:56.556  [2024-12-16 06:33:13.299334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:56.556  [2024-12-16 06:33:13.299364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:57.124   06:33:14	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:57.124   06:33:14	-- common/autotest_common.sh@862 -- # return 0
00:22:57.124   06:33:14	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:22:57.124   06:33:14	-- common/autotest_common.sh@728 -- # xtrace_disable
00:22:57.124   06:33:14	-- common/autotest_common.sh@10 -- # set +x
00:22:57.124   06:33:14	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:57.124   06:33:14	-- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:22:57.124   06:33:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:57.124   06:33:14	-- common/autotest_common.sh@10 -- # set +x
00:22:57.124  [2024-12-16 06:33:14.059805] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:22:57.124   06:33:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:57.124   06:33:14	-- host/digest.sh@104 -- # common_target_config
00:22:57.124   06:33:14	-- host/digest.sh@43 -- # rpc_cmd
00:22:57.124   06:33:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:57.124   06:33:14	-- common/autotest_common.sh@10 -- # set +x
00:22:57.382  null0
00:22:57.382  [2024-12-16 06:33:14.163713] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:57.382  [2024-12-16 06:33:14.187855] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
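[editor's note] The notices above (crc32c assigned to the error module, the null0 bdev, the TCP transport init and the 10.0.0.2:4420 listener) come from the target-side setup that run_digest_error and common_target_config drive against the default /var/tmp/spdk.sock. A rough reconstruction of that sequence follows; the rpc() shorthand is hypothetical, the null bdev size/block size and the bare transport options are assumptions, and only the accel_assign_opc arguments, bdev name, NQN and address/port are taken from the log:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # default socket /var/tmp/spdk.sock
    rpc accel_assign_opc -o crc32c -m error      # route crc32c through the error-injection module (pre-init)
    rpc framework_start_init
    rpc bdev_null_create null0 100 4096          # assumed size (MiB) and block size
    rpc nvmf_create_transport -t tcp
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4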
00:22:57.382   06:33:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:57.382   06:33:14	-- host/digest.sh@107 -- # run_bperf_err randread 4096 128
00:22:57.382   06:33:14	-- host/digest.sh@54 -- # local rw bs qd
00:22:57.382   06:33:14	-- host/digest.sh@56 -- # rw=randread
00:22:57.382   06:33:14	-- host/digest.sh@56 -- # bs=4096
00:22:57.382   06:33:14	-- host/digest.sh@56 -- # qd=128
00:22:57.383   06:33:14	-- host/digest.sh@58 -- # bperfpid=86994
00:22:57.383   06:33:14	-- host/digest.sh@60 -- # waitforlisten 86994 /var/tmp/bperf.sock
00:22:57.383   06:33:14	-- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:22:57.383   06:33:14	-- common/autotest_common.sh@829 -- # '[' -z 86994 ']'
00:22:57.383   06:33:14	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:57.383   06:33:14	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:57.383  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:57.383   06:33:14	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:57.383   06:33:14	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:57.383   06:33:14	-- common/autotest_common.sh@10 -- # set +x
00:22:57.383  [2024-12-16 06:33:14.252406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:57.383  [2024-12-16 06:33:14.252507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86994 ]
00:22:57.641  [2024-12-16 06:33:14.385677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:57.641  [2024-12-16 06:33:14.499401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:58.206   06:33:15	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:58.206   06:33:15	-- common/autotest_common.sh@862 -- # return 0
00:22:58.206   06:33:15	-- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:58.207   06:33:15	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:58.464   06:33:15	-- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:58.464   06:33:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:58.464   06:33:15	-- common/autotest_common.sh@10 -- # set +x
00:22:58.464   06:33:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:58.464   06:33:15	-- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:58.464   06:33:15	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:58.722  nvme0n1
00:22:58.722   06:33:15	-- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:58.722   06:33:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:58.722   06:33:15	-- common/autotest_common.sh@10 -- # set +x
00:22:58.722   06:33:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:58.722   06:33:15	-- host/digest.sh@69 -- # bperf_py perform_tests
00:22:58.722   06:33:15	-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
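[editor's note] Putting the error-injection pieces together: the bdevperf side (bperf.sock) is told to keep NVMe error statistics and retry indefinitely, while the target side (default spdk.sock, where crc32c is assigned to the error module) has injection disabled during connect and is then switched to corrupting digests before the I/O run, which produces the data digest errors below. The sequence as logged; the -t/-i arguments are copied verbatim and their exact semantics belong to the accel error module:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests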
00:22:58.980  Running I/O for 2 seconds...
00:22:58.981  [2024-12-16 06:33:15.787513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.787564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.787595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.799331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.799370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.799398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.809564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.809605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.809633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.819676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.819742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.819771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.830893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.830948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.830977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.841725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.841766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.841794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.850444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.850514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.850529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.859937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.859977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.860005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.869474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.869524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.869552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.879012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.879051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.879079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.890825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.890864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.890891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.901475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.901526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.901555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.912203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.912242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.912270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.923259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.923298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.923325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.932627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.932667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.932695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.943929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.943969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.943997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.981  [2024-12-16 06:33:15.954820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:58.981  [2024-12-16 06:33:15.954859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.981  [2024-12-16 06:33:15.954887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:15.966890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:15.966930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:15.966957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:15.978633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:15.978672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:15.978700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:15.988402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:15.988443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:15.988471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:15.999765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:15.999805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:15.999832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.010174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.010215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.010242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.019029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.019067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.019094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.029945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.029987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.030014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.042630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.042669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.042696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.055188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.055228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.055256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.067516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.067554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.067583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.079586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.079626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.079654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.092322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.092364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.092392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.104466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.104517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.104545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.112557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.112597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.112625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.124971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.125012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.125039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.137151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.137191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.137219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.147784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.147823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.147851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.156462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.156512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.156540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.168545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.168585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.168612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.181000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.181041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.181069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.192990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.193046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.193075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.204964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.205003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.205031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.217813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.217852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.217880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.275  [2024-12-16 06:33:16.229158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.275  [2024-12-16 06:33:16.229199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.275  [2024-12-16 06:33:16.229227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.239267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.239308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.239336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.251419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.251458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.251485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.260644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.260682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.260710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.269834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.269875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.269903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.279619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.279659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.279686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.289233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.289274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.289302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.298541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.298580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.298607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.307551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.307587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.307615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.316530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.316582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.316594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.326109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.326143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.326171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.339595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.339643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.339673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.349162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.349200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.349228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.359435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.359473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.359519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.369586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.369640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.369683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.379455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.379504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.379533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.392935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.392974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.393001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.403900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.403936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.403963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.413045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.413084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.413111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.424784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.424821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.424850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.436900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.561  [2024-12-16 06:33:16.436938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.561  [2024-12-16 06:33:16.436965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.561  [2024-12-16 06:33:16.448900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.562  [2024-12-16 06:33:16.448955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.562  [2024-12-16 06:33:16.448984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.562  [2024-12-16 06:33:16.458613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.562  [2024-12-16 06:33:16.458654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.562  [2024-12-16 06:33:16.458682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.562  [2024-12-16 06:33:16.471611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.562  [2024-12-16 06:33:16.471651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.562  [2024-12-16 06:33:16.471680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.562  [2024-12-16 06:33:16.483905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.562  [2024-12-16 06:33:16.483943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.562  [2024-12-16 06:33:16.483969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.562  [2024-12-16 06:33:16.496828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.562  [2024-12-16 06:33:16.496895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.562  [2024-12-16 06:33:16.496923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.562  [2024-12-16 06:33:16.509522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.562  [2024-12-16 06:33:16.509557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.562  [2024-12-16 06:33:16.509584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.562  [2024-12-16 06:33:16.520542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.562  [2024-12-16 06:33:16.520580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.562  [2024-12-16 06:33:16.520607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.562  [2024-12-16 06:33:16.531735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.562  [2024-12-16 06:33:16.531775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.562  [2024-12-16 06:33:16.531803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.542003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.542039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.542066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.553752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.553788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.553817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.564392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.564431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.564458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.574563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.574603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.574631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.586538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.586575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.586602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.598243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.598280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.598307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.607691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.607729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.607757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.619637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.619675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.619702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.631655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.631692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.631721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.644333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.644373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.644401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.656415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.656456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.656484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.666220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.666256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.666283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.674739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.674777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.674805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.686053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.686090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.686118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.696463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.696509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.696537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.705909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.705947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.705974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.716926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.716963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.716990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.726851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.726888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.726915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.821  [2024-12-16 06:33:16.737492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.821  [2024-12-16 06:33:16.737529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.821  [2024-12-16 06:33:16.737556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.822  [2024-12-16 06:33:16.746943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.822  [2024-12-16 06:33:16.746997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.822  [2024-12-16 06:33:16.747024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.822  [2024-12-16 06:33:16.759958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.822  [2024-12-16 06:33:16.759997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.822  [2024-12-16 06:33:16.760023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.822  [2024-12-16 06:33:16.771326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.822  [2024-12-16 06:33:16.771362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.822  [2024-12-16 06:33:16.771389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.822  [2024-12-16 06:33:16.780922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.822  [2024-12-16 06:33:16.780960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.822  [2024-12-16 06:33:16.780988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.822  [2024-12-16 06:33:16.792749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:22:59.822  [2024-12-16 06:33:16.792785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.822  [2024-12-16 06:33:16.792813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.805863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.805903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.805930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.817580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.817618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.817645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.828906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.828945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.828971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.841439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.841476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.841515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.852635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.852672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.852699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.861837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.861876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.861903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.872282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.872319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.872346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.881392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.881431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.881458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.893671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.893708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.893735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.905970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.906007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.906035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.917223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.917259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.917288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.926122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.926161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.926188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.937173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.937210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.937237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.947066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.947102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.947129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.956384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.956423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.956450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.965666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.965705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.965732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.974656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.974693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.974720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.985966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.986004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.986031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:16.995953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:16.995992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:16.996019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:17.007535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:17.007572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:17.007600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:17.019293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:17.019329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:17.019356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:17.028499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.081  [2024-12-16 06:33:17.028535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.081  [2024-12-16 06:33:17.028562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.081  [2024-12-16 06:33:17.038371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.082  [2024-12-16 06:33:17.038408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.082  [2024-12-16 06:33:17.038436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.082  [2024-12-16 06:33:17.049259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.082  [2024-12-16 06:33:17.049296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.082  [2024-12-16 06:33:17.049324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.340  [2024-12-16 06:33:17.058912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.058948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.058975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.071125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.071164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.071191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.083680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.083719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.083747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.096464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.096512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.096539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.105796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.105834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.105862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.116375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.116413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.116440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.125593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.125629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.125656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.138159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.138213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.138241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.150567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.150605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.150632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.159947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.159983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.160011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.172998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.173036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.173064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.181711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.181749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.181776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.193234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.193272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.193299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.204684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.204722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.204749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.214647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.214685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.214713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.226287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.226324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.226351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.235637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.235675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.235702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.245297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.245335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.245362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.254745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.254782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.254809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.264137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.264175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.264203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.273587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.273622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.273650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.284900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.284936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.284964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.296574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.296627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.296654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.341  [2024-12-16 06:33:17.305800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.341  [2024-12-16 06:33:17.305853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.341  [2024-12-16 06:33:17.305880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.318829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.318878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.318906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.331672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.331709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.331736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.343817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.343856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.343883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.356240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.356280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.356308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.368659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.368698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.368726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.376787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.376828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.376855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.389160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.389202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.389229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.401775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.401812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.401840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.413401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.413436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.413464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.424330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.424369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.424397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.433210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.433246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.433273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.442453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.442508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.442536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.452122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.452160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.452187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.461878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.461915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.461942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.472284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.472337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.472365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.481890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.481928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.481956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.492666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.492704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.492732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.504104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.504141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.504169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.516776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.516813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.516841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.525442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.525479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.525519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.537326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.537364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.537391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.550532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.550570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.550599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.601  [2024-12-16 06:33:17.563039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.601  [2024-12-16 06:33:17.563076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.601  [2024-12-16 06:33:17.563103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.860  [2024-12-16 06:33:17.576540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.860  [2024-12-16 06:33:17.576576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.860  [2024-12-16 06:33:17.576604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.860  [2024-12-16 06:33:17.588059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.860  [2024-12-16 06:33:17.588099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.860  [2024-12-16 06:33:17.588126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.860  [2024-12-16 06:33:17.599498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.860  [2024-12-16 06:33:17.599534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.860  [2024-12-16 06:33:17.599562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.860  [2024-12-16 06:33:17.608098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.860  [2024-12-16 06:33:17.608137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.860  [2024-12-16 06:33:17.608164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.860  [2024-12-16 06:33:17.623447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.860  [2024-12-16 06:33:17.623509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.860  [2024-12-16 06:33:17.623522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.632657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.632695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.632723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.642443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.642519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.642533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.651760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.651798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.651826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.660430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.660468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.660496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.671469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.671518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.671546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.682993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.683029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.683057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.693281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.693321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.693349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.702360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.702397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.702424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.712213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.712249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.712277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.723052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.723105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.723133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.735701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.735738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.735765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.746722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.746760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.746787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.755864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.755902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.755929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861  [2024-12-16 06:33:17.766812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cbf50)
00:23:00.861  [2024-12-16 06:33:17.766849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.861  [2024-12-16 06:33:17.766876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:00.861                                                                                                  Latency(us)
00:23:00.861   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:00.861   Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:00.861  	 nvme0n1             :       2.00   23384.68      91.35       0.00     0.00    5469.08    2457.60   17754.30
00:23:00.861   ===================================================================================================================
00:23:00.861   Total                       :              23384.68      91.35       0.00     0.00    5469.08    2457.60   17754.30
00:23:00.861  0
00:23:00.861    06:33:17	-- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:00.861    06:33:17	-- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:00.861    06:33:17	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:00.861    06:33:17	-- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:00.861  			| .driver_specific
00:23:00.861  			| .nvme_error
00:23:00.861  			| .status_code
00:23:00.861  			| .command_transient_transport_error'
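For orientation, the five trace lines above are one jq query split over several lines; a minimal one-line equivalent (a sketch, not captured output: the rpc.py path, bperf socket, bdev name and jq path all come from the trace above) would be:

  # Sketch only: same query as the trace above, collapsed into a single pipeline.
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The check that follows passes when this counter is non-zero, i.e. the injected digest corruption was detected.
  (( count > 0 )) && echo "transient transport errors recorded: $count"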
00:23:01.120   06:33:18	-- host/digest.sh@71 -- # (( 183 > 0 ))
00:23:01.120   06:33:18	-- host/digest.sh@73 -- # killprocess 86994
00:23:01.120   06:33:18	-- common/autotest_common.sh@936 -- # '[' -z 86994 ']'
00:23:01.120   06:33:18	-- common/autotest_common.sh@940 -- # kill -0 86994
00:23:01.120    06:33:18	-- common/autotest_common.sh@941 -- # uname
00:23:01.120   06:33:18	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:01.120    06:33:18	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86994
00:23:01.378   06:33:18	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:01.378  killing process with pid 86994
00:23:01.378   06:33:18	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:01.378   06:33:18	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86994'
00:23:01.378  Received shutdown signal, test time was about 2.000000 seconds
00:23:01.378                                                                                                  Latency(us)
00:23:01.378   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:01.378   ===================================================================================================================
00:23:01.378   Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:01.378   06:33:18	-- common/autotest_common.sh@955 -- # kill 86994
00:23:01.378   06:33:18	-- common/autotest_common.sh@960 -- # wait 86994
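The teardown above is a check-then-kill pattern; a hedged sketch of it follows (the pid value comes from the trace, the kill -0 liveness probe is an assumption, the script itself resolves the process via ps):

  # Sketch of the killprocess flow traced above; not the script's literal code.
  pid=86994
  if kill -0 "$pid" 2>/dev/null; then          # still alive?
    kill "$pid"                                # ask bdevperf to shut down
    wait "$pid" 2>/dev/null                    # reap it so the bperf socket can be reused
  fi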
00:23:01.637   06:33:18	-- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:23:01.637   06:33:18	-- host/digest.sh@54 -- # local rw bs qd
00:23:01.637   06:33:18	-- host/digest.sh@56 -- # rw=randread
00:23:01.637   06:33:18	-- host/digest.sh@56 -- # bs=131072
00:23:01.637   06:33:18	-- host/digest.sh@56 -- # qd=16
00:23:01.637   06:33:18	-- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:01.637   06:33:18	-- host/digest.sh@58 -- # bperfpid=87080
00:23:01.637   06:33:18	-- host/digest.sh@60 -- # waitforlisten 87080 /var/tmp/bperf.sock
00:23:01.637   06:33:18	-- common/autotest_common.sh@829 -- # '[' -z 87080 ']'
00:23:01.637   06:33:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:01.637   06:33:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:01.637  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:01.637   06:33:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:01.637   06:33:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:01.637   06:33:18	-- common/autotest_common.sh@10 -- # set +x
00:23:01.637  I/O size of 131072 is greater than zero copy threshold (65536).
00:23:01.637  Zero copy mechanism will not be used.
00:23:01.637  [2024-12-16 06:33:18.452278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:01.637  [2024-12-16 06:33:18.452380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87080 ]
00:23:01.637  [2024-12-16 06:33:18.580321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:01.896  [2024-12-16 06:33:18.669934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
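A sketch of the launch pattern recorded above (the bdevperf flags and socket path are taken from the trace; the rpc_get_methods polling loop is an assumption standing in for the script's waitforlisten helper):

  # Sketch only: start bdevperf against a private RPC socket, then poll until it answers.
  sock=/var/tmp/bperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1                                  # wait for the app to listen on the UNIX socket
  done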
00:23:02.462   06:33:19	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:02.462   06:33:19	-- common/autotest_common.sh@862 -- # return 0
00:23:02.462   06:33:19	-- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:02.462   06:33:19	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:02.720   06:33:19	-- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:02.720   06:33:19	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:02.720   06:33:19	-- common/autotest_common.sh@10 -- # set +x
00:23:02.978   06:33:19	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.978   06:33:19	-- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:02.978   06:33:19	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:03.237  nvme0n1
00:23:03.237   06:33:20	-- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:03.237   06:33:20	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:03.237   06:33:20	-- common/autotest_common.sh@10 -- # set +x
00:23:03.237   06:33:20	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:03.237   06:33:20	-- host/digest.sh@69 -- # bperf_py perform_tests
00:23:03.237   06:33:20	-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:03.237  I/O size of 131072 is greater than zero copy threshold (65536).
00:23:03.237  Zero copy mechanism will not be used.
00:23:03.237  Running I/O for 2 seconds...
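Condensed, the configuration that produces the digest errors below is the RPC sequence already traced above; a sketch follows (only the grouping is new; the accel_error_inject_error calls are issued through rpc_cmd against the target application in the script, so the default target RPC socket is assumed here):

  # Sketch only; every command mirrors an RPC visible in the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  $rpc -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable                  # target side: clear any earlier injection
  $rpc -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # enable data digest on the NVMe/TCP connection
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32            # target side: corrupt crc32c so digests mismatch
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests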
00:23:03.237  [2024-12-16 06:33:20.185710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.237  [2024-12-16 06:33:20.185799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.237  [2024-12-16 06:33:20.185813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.237  [2024-12-16 06:33:20.189647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.237  [2024-12-16 06:33:20.189703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.237  [2024-12-16 06:33:20.189715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.237  [2024-12-16 06:33:20.194297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.237  [2024-12-16 06:33:20.194334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.237  [2024-12-16 06:33:20.194362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.237  [2024-12-16 06:33:20.197833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.237  [2024-12-16 06:33:20.197867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.237  [2024-12-16 06:33:20.197895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.237  [2024-12-16 06:33:20.201890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.237  [2024-12-16 06:33:20.201926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.237  [2024-12-16 06:33:20.201954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.237  [2024-12-16 06:33:20.205396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.237  [2024-12-16 06:33:20.205431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.237  [2024-12-16 06:33:20.205458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.237  [2024-12-16 06:33:20.209219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.237  [2024-12-16 06:33:20.209254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.237  [2024-12-16 06:33:20.209282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.497  [2024-12-16 06:33:20.213134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.497  [2024-12-16 06:33:20.213174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.497  [2024-12-16 06:33:20.213201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.497  [2024-12-16 06:33:20.217322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.497  [2024-12-16 06:33:20.217356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.497  [2024-12-16 06:33:20.217384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.497  [2024-12-16 06:33:20.220829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.497  [2024-12-16 06:33:20.220863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.497  [2024-12-16 06:33:20.220891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.497  [2024-12-16 06:33:20.224782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.497  [2024-12-16 06:33:20.224817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.497  [2024-12-16 06:33:20.224844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.497  [2024-12-16 06:33:20.228555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.497  [2024-12-16 06:33:20.228590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.497  [2024-12-16 06:33:20.228617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.497  [2024-12-16 06:33:20.232263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.497  [2024-12-16 06:33:20.232296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.232324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.236268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.236303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.236331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.240165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.240217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.240245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.244309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.244344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.244372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.247953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.247991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.248018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.252296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.252330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.252357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.255693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.255744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.255771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.259805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.259856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.259884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.263224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.263274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.263302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.267458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.267518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.267547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.271279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.271313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.271341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.273798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.273830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.273857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.277813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.277850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.277878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.281904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.281941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.281969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.285654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.285694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.285722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.289929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.289967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.289994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.293854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.293910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.293938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.297421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.297461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.297488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.301563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.301602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.301629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.306112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.306168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.306195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.309647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.309682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.309708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.313693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.313730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.313758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.317108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.317147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.317174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.320926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.320965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.320993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.324766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.324804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.324832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.498  [2024-12-16 06:33:20.328738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.498  [2024-12-16 06:33:20.328777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.498  [2024-12-16 06:33:20.328805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.332281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.332320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.332347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.336080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.336119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.336147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.340160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.340198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.340226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.344067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.344102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.344130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.347504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.347564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.347592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.352017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.352068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.352095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.355201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.355235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.355247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.359126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.359161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.359189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.362970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.363003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.363031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.366405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.366456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.366536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.370330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.370364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.370392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.373587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.373620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.373647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.376648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.376683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.376710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.380349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.380385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.380412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.384128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.384161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.384188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.387553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.387601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.387628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.391275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.391309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.391335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.395426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.395459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.395486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.399191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.399225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.399252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.402722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.402773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.402816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.406434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.406531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.406561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.410509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.410557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.410585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.414588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.414626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.414654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.418656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.418707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.418736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.422462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.422546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.422558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.426194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.426245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.426273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.430008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.430060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.499  [2024-12-16 06:33:20.430088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.499  [2024-12-16 06:33:20.433199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.499  [2024-12-16 06:33:20.433253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.433280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.437275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.437310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.437338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.441104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.441139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.441166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.445109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.445160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.445187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.449009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.449043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.449070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.452613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.452647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.452675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.456791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.456825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.456852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.460579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.460613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.460641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.464062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.464095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.464122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.500  [2024-12-16 06:33:20.467828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.500  [2024-12-16 06:33:20.467867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.500  [2024-12-16 06:33:20.467895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.471746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.471781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.471808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.475511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.475553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.475581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.479672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.479710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.479738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.483211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.483245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.483273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.487008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.487047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.487075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.490133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.490184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.490212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.493710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.493761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.493789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.497040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.497097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.497125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.501114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.501153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.501181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.505668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.505722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.505749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.509660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.509711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.509739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.760  [2024-12-16 06:33:20.513100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.760  [2024-12-16 06:33:20.513155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.760  [2024-12-16 06:33:20.513182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.516739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.516778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.516805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.521081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.521120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.521147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.525240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.525280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.525307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.529245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.529299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.529327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.533107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.533145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.533172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.536779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.536835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.536878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.541089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.541144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.541171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.545185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.545255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.545283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.549189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.549244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.549271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.553612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.553668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.553696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.557911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.557951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.557978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.562189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.562227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.562254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.565906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.565941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.565969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.569899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.569940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.569967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.573648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.573701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.573729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.577776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.577845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.577872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.581675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.581714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.581742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.584559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.584613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.584641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.588413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.588454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.588482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.592138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.592193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.592220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.595935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.595976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.596003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.599753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.599793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.599820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.603251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.603289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.603316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.607678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.607719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.607746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.611550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.761  [2024-12-16 06:33:20.611588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.761  [2024-12-16 06:33:20.611615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.761  [2024-12-16 06:33:20.615154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.615193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.615220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.618709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.618765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.618807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.622042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.622075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.622103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.626077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.626129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.626157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.629738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.629790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.629818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.632838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.632907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.632934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.636332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.636371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.636398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.641016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.641072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.641100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.645330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.645386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.645414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.649468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.649520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.649547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.653664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.653704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.653732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.657261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.657299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.657325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.661676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.661714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.661742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.665132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.665170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.665198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.669332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.669373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.669400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.673017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.673075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.673102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.676848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.676888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.676915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.680555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.680595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.680623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.684204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.684244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.684271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.687897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.687936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.687964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.691996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.692036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.692063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.696063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.696103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.696130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.700086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.700122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.700150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.703637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.703676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.703703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.707479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.707561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.707590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.711527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.711566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.762  [2024-12-16 06:33:20.711593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.762  [2024-12-16 06:33:20.714747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.762  [2024-12-16 06:33:20.714818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.763  [2024-12-16 06:33:20.714846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:03.763  [2024-12-16 06:33:20.718217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.763  [2024-12-16 06:33:20.718268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.763  [2024-12-16 06:33:20.718295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:03.763  [2024-12-16 06:33:20.721847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.763  [2024-12-16 06:33:20.721914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.763  [2024-12-16 06:33:20.721942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:03.763  [2024-12-16 06:33:20.725333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.763  [2024-12-16 06:33:20.725385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.763  [2024-12-16 06:33:20.725413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:03.763  [2024-12-16 06:33:20.730175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:03.763  [2024-12-16 06:33:20.730225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.763  [2024-12-16 06:33:20.730253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.734984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.735038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.735077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.739039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.739078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.739105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.742771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.742855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.742882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.746451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.746542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.746570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.750346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.750380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.750408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.754365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.754400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.754427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.758623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.758663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.758691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.762217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.762250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.762279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.766374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.766409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.766436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.770216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.770250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.770277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.774224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.774259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.774286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.778213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.778263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.778291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.781705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.781740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.781768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.785112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.785150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.785177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.788523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.788575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.788602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.791869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.791923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.791950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.795629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.795668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.795695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.023  [2024-12-16 06:33:20.799181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.023  [2024-12-16 06:33:20.799235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.023  [2024-12-16 06:33:20.799263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.802875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.802914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.802941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.806356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.806406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.806433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.810834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.810872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.810899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.814191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.814226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.814253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.817927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.817963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.821449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.821513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.821527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.825264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.825319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.825347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.829437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.829475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.829516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.832937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.832976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.833004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.836815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.836872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.836914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.840952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.840991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.841019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.844943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.844982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.845008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.849193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.849251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.849278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.853273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.853313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.853340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.857346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.857381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.857408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.861438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.861519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.861532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.865118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.865158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.865184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.868593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.868649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.868676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.872585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.872641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.872668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.876565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.876620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.876648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.879827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.879868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.879895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.883751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.883791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.883818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.887571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.887609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.887637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.891093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.891132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.891159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.895173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.895212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.024  [2024-12-16 06:33:20.895239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.024  [2024-12-16 06:33:20.899174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.024  [2024-12-16 06:33:20.899211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.899238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.902591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.902644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.902672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.905744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.905795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.905822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.909574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.909614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.909641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.913334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.913374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.913401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.917779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.917837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.917864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.921756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.921793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.921820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.926004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.926043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.926071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.930244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.930282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.930308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.934055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.934105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.934133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.938151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.938206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.938234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.941701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.941752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.941780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.945322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.945358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.945384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.949334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.949370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.949397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.953186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.953226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.953253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.956970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.957009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.957035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.961019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.961074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.961101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.964905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.964945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.964972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.968456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.968508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.968537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.972510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.972576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.972587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.976205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.976246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.976274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.979534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.979572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.979599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.983105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.983159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.983187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.986729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.986813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.986824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.990590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.990646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.990658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.025  [2024-12-16 06:33:20.994112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.025  [2024-12-16 06:33:20.994144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.025  [2024-12-16 06:33:20.994171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:20.998175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:20.998208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:20.998235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.002288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.002323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.002350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.006202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.006236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.006263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.009923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.009957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.009984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.013816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.013853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.013881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.017126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.017164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.017191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.021482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.021532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.021560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.025215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.025255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.025282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.029112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.029154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.029181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.032712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.032752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.032779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.036533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.036586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.036613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.039492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.039541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.039569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.043633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.043670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.043698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.047061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.047098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.047126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.050889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.050926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.050953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.054492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.054552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.054564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.057913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.057947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.057973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.061735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.061773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.061800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.065968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.066008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.066035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.070320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.070360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.070387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.074248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.074299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.074326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.077883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.077933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.077960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.081816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.081855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.081881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.085420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.286  [2024-12-16 06:33:21.085476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.286  [2024-12-16 06:33:21.085515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.286  [2024-12-16 06:33:21.089640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.089679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.089706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.093179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.093219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.093247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.097226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.097267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.097294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.101077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.101132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.101160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.104919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.104958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.104986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.108420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.108462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.108489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.112388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.112426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.112453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.116163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.116218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.116245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.120074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.120115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.120142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.123859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.123928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.123955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.127419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.127473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.127510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.131550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.131587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.131615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.135147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.135183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.135211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.139160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.139197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.139224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.141761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.141809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.141836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.145400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.145440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.145467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.148780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.148835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.148863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.152557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.152595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.152622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.156457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.156530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.156559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.160223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.160263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.160289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.163801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.163842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.163869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.167387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.167424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.167451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.171147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.171185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.171212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.174682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.174738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.174766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.178828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.178895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.178921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.182945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.182999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.183027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.186496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.186572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.186600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.287  [2024-12-16 06:33:21.190554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.287  [2024-12-16 06:33:21.190612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.287  [2024-12-16 06:33:21.190624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.194591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.194664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.194677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.198746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.198803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.198831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.202417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.202489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.202538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.206534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.206573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.206601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.210182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.210232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.210260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.214437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.214534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.214563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.218139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.218189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.218217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.222184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.222235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.222262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.226558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.226598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.226626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.230226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.230293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.230321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.234024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.234076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.234103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.237879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.237932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.237960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.241110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.241161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.241188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.244067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.244120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.244147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.247692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.247746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.247773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.251817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.251890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.251917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.288  [2024-12-16 06:33:21.256091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.288  [2024-12-16 06:33:21.256146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.288  [2024-12-16 06:33:21.256174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.548  [2024-12-16 06:33:21.260480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.548  [2024-12-16 06:33:21.260546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.548  [2024-12-16 06:33:21.260574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.548  [2024-12-16 06:33:21.264104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.548  [2024-12-16 06:33:21.264159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.264186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.267946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.268002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.268029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.272930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.272985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.273013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.276415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.276470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.276507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.280325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.280364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.280392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.282882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.282935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.282962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.286473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.286552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.286581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.290607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.290663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.290690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.294219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.294269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.294296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.298662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.298701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.298728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.303001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.303055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.303083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.307301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.307354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.307382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.310777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.310833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.310860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.315046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.315100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.315127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.318604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.318641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.318653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.322963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.323019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.323046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.327138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.327194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.327221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.331079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.331142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.331169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.335199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.335252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.335280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.338622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.338678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.338691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.342613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.342668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.342679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.346973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.347029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.347057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.351327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.351382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.351410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.355368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.355422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.355450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.359404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.359457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.359483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.363142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.549  [2024-12-16 06:33:21.363198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.549  [2024-12-16 06:33:21.363225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.549  [2024-12-16 06:33:21.366652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.366707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.366735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.370342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.370393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.370421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.373881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.373932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.373959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.377448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.377527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.377557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.381152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.381209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.381237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.384801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.384839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.384867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.388314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.388354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.388381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.391855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.391895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.391921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.396024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.396063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.396090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.399570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.399610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.399637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.402576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.402630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.402658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.406539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.406575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.406603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.410148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.410181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.410208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.413624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.413659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.413685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.418119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.418159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.418186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.422452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.422533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.422562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.426227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.426260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.426287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.429845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.429880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.429907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.433279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.433317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.433344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.437188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.437224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.437250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.440938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.440979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.441006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.444709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.444743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.444770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.448745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.448778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.448805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.452767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.452801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.452828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.456431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.456464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.550  [2024-12-16 06:33:21.456491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.550  [2024-12-16 06:33:21.460300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.550  [2024-12-16 06:33:21.460333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.460360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.463790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.463823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.463850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.467470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.467531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.467558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.471073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.471107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.471135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.474897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.474931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.474958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.478911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.478944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.478971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.482221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.482254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.482280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.486369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.486409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.486436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.489529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.489561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.489588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.493318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.493355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.493382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.496949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.496985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.497011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.500730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.500768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.500795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.504438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.504472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.504511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.507528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.507578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.507605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.511234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.511268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.511296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.514887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.514921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.514949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.551  [2024-12-16 06:33:21.518429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.551  [2024-12-16 06:33:21.518462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.551  [2024-12-16 06:33:21.518540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.811  [2024-12-16 06:33:21.522271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.811  [2024-12-16 06:33:21.522305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.811  [2024-12-16 06:33:21.522332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.811  [2024-12-16 06:33:21.526356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.811  [2024-12-16 06:33:21.526409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.811  [2024-12-16 06:33:21.526437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.811  [2024-12-16 06:33:21.530844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.811  [2024-12-16 06:33:21.530912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.530937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.534764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.534835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.534847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.538702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.538756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.538782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.542249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.542282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.542309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.545794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.545827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.545854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.549475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.549541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.549569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.553342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.553394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.553422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.557061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.557115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.557143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.561305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.561373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.561400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.565678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.565732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.565759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.569604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.569659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.569686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.573843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.573929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.573957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.577601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.577657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.577685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.581319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.581359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.581386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.586039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.586079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.586106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.590030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.590070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.590097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.594615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.594671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.594698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.597779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.597828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.597840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.601681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.601735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.601763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.605497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.605553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.605580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.609593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.609649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.609675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.613185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.613223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.613251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.617107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.617145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.617172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.620252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.620307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.620334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.624123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.624163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.624190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.627645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.627698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.627725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.631558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.631598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.812  [2024-12-16 06:33:21.631625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.812  [2024-12-16 06:33:21.635718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.812  [2024-12-16 06:33:21.635759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.635786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.639702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.639744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.639772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.644083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.644123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.644150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.648247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.648288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.648316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.652546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.652587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.652614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.656093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.656134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.656161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.660068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.660108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.660135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.664018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.664059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.664086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.667542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.667581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.667608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.670393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.670425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.670452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.674855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.674890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.674901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.678890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.678928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.678955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.681888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.681921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.681948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.685935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.685975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.686002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.689898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.689938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.689965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.693911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.693948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.693976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.697315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.697355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.697382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.701208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.701248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.701275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.704933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.704973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.704999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.708352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.708390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.708416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.712022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.712060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.712086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.715679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.715732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.715759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.719127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.719164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.719191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.723630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.723685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.723713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.726771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.726826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.726853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.731057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.731096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.731123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.734268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.734318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.734345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.738125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.813  [2024-12-16 06:33:21.738158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.813  [2024-12-16 06:33:21.738185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.813  [2024-12-16 06:33:21.742206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.742238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.742266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.745821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.745854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.745881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.749506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.749544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.749572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.753461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.753508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.753535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.757478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.757526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.757553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.761322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.761362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.761388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.765260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.765301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.765329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.768919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.768957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.768984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.772837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.772893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.772905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.776467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.776516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.776544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.780071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.780110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.780137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:04.814  [2024-12-16 06:33:21.784389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:04.814  [2024-12-16 06:33:21.784427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.814  [2024-12-16 06:33:21.784454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.788486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.788550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.788577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.792655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.792708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.792734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.797039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.797078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.797104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.800861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.800900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.800928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.804770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.804810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.804836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.808333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.808370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.808398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.811214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.811251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.811278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.815128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.815166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.815194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.819141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.819179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.819206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.822950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.822987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.823014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.074  [2024-12-16 06:33:21.826264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.074  [2024-12-16 06:33:21.826297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.074  [2024-12-16 06:33:21.826324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.830143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.830177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.830204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.834262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.834297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.834323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.837697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.837732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.837759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.841681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.841722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.841749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.846039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.846081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.846108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.849414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.849451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.849478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.853297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.853336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.853363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.857542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.857580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.857607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.860571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.860612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.860639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.864764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.864802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.864830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.867942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.867982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.868008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.871316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.871353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.871380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.875293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.875331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.875358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.879245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.879282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.879308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.882799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.882837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.882864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.886254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.886285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.886312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.890283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.890315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.890342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.894602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.894639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.894667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.898540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.898590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.898617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.902076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.902109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.902136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.905820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.905857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.905884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.909801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.909839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.909866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.913205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.913243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.913270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.917238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.917277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.917305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.920597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.920638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.075  [2024-12-16 06:33:21.920665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.075  [2024-12-16 06:33:21.924203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.075  [2024-12-16 06:33:21.924242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.924270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.927859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.927930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.927958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.931582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.931635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.931663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.935281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.935320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.935347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.938652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.938690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.938718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.942446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.942531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.942544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.946233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.946268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.946295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.950316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.950350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.950377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.954212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.954246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.954274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.957511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.957542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.957569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.960647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.960687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.960714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.964107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.964147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.964175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.968170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.968209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.968236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.972303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.972343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.972371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.976567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.976608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.976634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.980348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.980388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.980415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.984321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.984361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.984388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.988694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.988749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.988776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.992959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.992999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.993027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:21.996734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:21.996788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:21.996815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.000526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:22.000581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:22.000608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.004583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:22.004637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:22.004665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.008279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:22.008318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:22.008346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.012025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:22.012066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:22.012093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.015200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:22.015238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:22.015266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.019443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:22.019481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:22.019520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.022760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:22.022797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:22.022824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.026209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.076  [2024-12-16 06:33:22.026243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.076  [2024-12-16 06:33:22.026270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.076  [2024-12-16 06:33:22.030206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.077  [2024-12-16 06:33:22.030241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.077  [2024-12-16 06:33:22.030267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.077  [2024-12-16 06:33:22.034136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.077  [2024-12-16 06:33:22.034173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.077  [2024-12-16 06:33:22.034200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.077  [2024-12-16 06:33:22.037548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.077  [2024-12-16 06:33:22.037583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.077  [2024-12-16 06:33:22.037611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.077  [2024-12-16 06:33:22.041239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.077  [2024-12-16 06:33:22.041277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.077  [2024-12-16 06:33:22.041305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.077  [2024-12-16 06:33:22.044454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.077  [2024-12-16 06:33:22.044516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.077  [2024-12-16 06:33:22.044545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.336  [2024-12-16 06:33:22.048661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.336  [2024-12-16 06:33:22.048714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.336  [2024-12-16 06:33:22.048742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.336  [2024-12-16 06:33:22.052415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.336  [2024-12-16 06:33:22.052453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.336  [2024-12-16 06:33:22.052480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.336  [2024-12-16 06:33:22.056264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.336  [2024-12-16 06:33:22.056305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.336  [2024-12-16 06:33:22.056332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.336  [2024-12-16 06:33:22.060362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.336  [2024-12-16 06:33:22.060400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.336  [2024-12-16 06:33:22.060427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.336  [2024-12-16 06:33:22.063912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.336  [2024-12-16 06:33:22.063950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.336  [2024-12-16 06:33:22.063977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.336  [2024-12-16 06:33:22.067727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.336  [2024-12-16 06:33:22.067767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.336  [2024-12-16 06:33:22.067793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.071834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.071873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.071900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.075351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.075390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.075417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.078724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.078773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.078815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.082720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.082761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.082773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.086320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.086355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.086381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.089990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.090024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.090052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.093843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.093881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.093908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.097842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.097880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.097907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.101762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.101803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.101830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.105148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.105186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.105213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.109352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.109391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.109418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.113235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.113273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.113300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.116900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.116939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.116966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.120415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.120453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.120480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.124204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.124244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.124271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.128226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.128281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.128308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.131867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.131906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.131933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.135542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.135579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.135606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.139749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.139790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.139817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.143333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.143370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.143397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.147575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.147613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.147640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.150949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.150986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.151013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.154849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.154887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.154915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.158199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.158232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.158259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.162222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.162255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.162282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.165441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.165475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.165514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.168875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.337  [2024-12-16 06:33:22.168914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.337  [2024-12-16 06:33:22.168941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:05.337  [2024-12-16 06:33:22.172893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.338  [2024-12-16 06:33:22.172932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.338  [2024-12-16 06:33:22.172960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:05.338  [2024-12-16 06:33:22.176981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.338  [2024-12-16 06:33:22.177018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.338  [2024-12-16 06:33:22.177044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:05.338  [2024-12-16 06:33:22.181149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe097e0)
00:23:05.338  [2024-12-16 06:33:22.181189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.338  [2024-12-16 06:33:22.181216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:05.338  
00:23:05.338                                                                                                  Latency(us)
00:23:05.338  
[2024-12-16T06:33:22.314Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:05.338  
[2024-12-16T06:33:22.314Z]  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:05.338  	 nvme0n1             :       2.00    8120.73    1015.09       0.00     0.00    1967.03     506.41    5064.15
00:23:05.338  
[2024-12-16T06:33:22.314Z]  ===================================================================================================================
00:23:05.338  
[2024-12-16T06:33:22.314Z]  Total                       :               8120.73    1015.09       0.00     0.00    1967.03     506.41    5064.15
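(Consistency check on the totals above: at the 128 KiB I/O size reported for this job, 8120.73 IOPS × 131072 bytes ≈ 1,064,400,323 bytes/s ≈ 1015.09 MiB/s, which matches the MiB/s column.)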
00:23:05.338  0
00:23:05.338    06:33:22	-- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:05.338    06:33:22	-- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:05.338    06:33:22	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:05.338    06:33:22	-- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:05.338  			| .driver_specific
00:23:05.338  			| .nvme_error
00:23:05.338  			| .status_code
00:23:05.338  			| .command_transient_transport_error'
00:23:05.596   06:33:22	-- host/digest.sh@71 -- # (( 524 > 0 ))
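The get_transient_errcount step above reads the NVMe error counters back out of the bdevperf instance with bdev_get_iostat and extracts the transient-transport-error count with jq; 524 such completions were counted for this pass, satisfying the greater-than-zero check traced above. A minimal stand-alone form of the same query, assuming the bperf RPC socket and bdev name used in this run, would be:

    # Read per-bdev NVMe error statistics (populated because --nvme-error-stat was
    # enabled via bdev_nvme_set_options) and pull out the transient transport error
    # counter that the digest test asserts is non-zero after CRC corruption.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'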
00:23:05.596   06:33:22	-- host/digest.sh@73 -- # killprocess 87080
00:23:05.596   06:33:22	-- common/autotest_common.sh@936 -- # '[' -z 87080 ']'
00:23:05.596   06:33:22	-- common/autotest_common.sh@940 -- # kill -0 87080
00:23:05.596    06:33:22	-- common/autotest_common.sh@941 -- # uname
00:23:05.596   06:33:22	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:05.596    06:33:22	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87080
00:23:05.596  killing process with pid 87080
00:23:05.596  Received shutdown signal, test time was about 2.000000 seconds
00:23:05.596  
00:23:05.596                                                                                                  Latency(us)
00:23:05.596  
[2024-12-16T06:33:22.572Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:05.596  
[2024-12-16T06:33:22.572Z]  ===================================================================================================================
00:23:05.596  
[2024-12-16T06:33:22.572Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:05.596   06:33:22	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:05.596   06:33:22	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:05.596   06:33:22	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 87080'
00:23:05.596   06:33:22	-- common/autotest_common.sh@955 -- # kill 87080
00:23:05.596   06:33:22	-- common/autotest_common.sh@960 -- # wait 87080
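The killprocess helper traced above shuts the first bdevperf instance down once its error count has been checked. Reconstructed from the xtrace lines (a sketch, not the verbatim autotest_common.sh helper; the sudo branch is not exercised in this log), the sequence is roughly:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                     # guard against an empty pid argument
        kill -0 "$pid" 2>/dev/null || return 1        # is the process still running?
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_1
        fi
        [ "$process_name" = sudo ] && return 1        # refuse to kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap it so the next pass starts clean
    }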
00:23:05.854  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:05.854   06:33:22	-- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:23:05.854   06:33:22	-- host/digest.sh@54 -- # local rw bs qd
00:23:05.854   06:33:22	-- host/digest.sh@56 -- # rw=randwrite
00:23:05.854   06:33:22	-- host/digest.sh@56 -- # bs=4096
00:23:05.854   06:33:22	-- host/digest.sh@56 -- # qd=128
00:23:05.854   06:33:22	-- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:23:05.854   06:33:22	-- host/digest.sh@58 -- # bperfpid=87176
00:23:05.854   06:33:22	-- host/digest.sh@60 -- # waitforlisten 87176 /var/tmp/bperf.sock
00:23:05.854   06:33:22	-- common/autotest_common.sh@829 -- # '[' -z 87176 ']'
00:23:05.854   06:33:22	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:05.854   06:33:22	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:05.854   06:33:22	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:05.854   06:33:22	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:05.854   06:33:22	-- common/autotest_common.sh@10 -- # set +x
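The block above starts the next error-injection pass (randwrite, 4 KiB I/Os, queue depth 128): bdevperf is launched on core 1 (-m 2) against a private RPC socket in wait-for-RPC mode (-z), and waitforlisten blocks until that socket is accepting connections before any RPCs are sent. A condensed sketch of that launch-and-wait pattern, using the flags from this run and a simplified stand-in for waitforlisten, is:

    # Launch bdevperf paused (-z) so the workload only starts when perform_tests
    # is sent over /var/tmp/bperf.sock; 2-second randwrite run, 4 KiB I/Os, qd 128.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Poll the RPC socket until it answers (rpc_get_methods is valid in all states).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done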
00:23:06.112  [2024-12-16 06:33:22.845593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:06.112  [2024-12-16 06:33:22.845710] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87176 ]
00:23:06.112  [2024-12-16 06:33:22.975143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:06.112  [2024-12-16 06:33:23.068160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:07.047   06:33:23	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:07.047   06:33:23	-- common/autotest_common.sh@862 -- # return 0
00:23:07.047   06:33:23	-- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:07.047   06:33:23	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
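The two options set here shape how the injected digest errors surface: --nvme-error-stat has the NVMe bdev module keep per-status-code error counters (the driver_specific.nvme_error block read back by bdev_get_iostat), and --bdev-retry-count -1 retries failed I/O indefinitely, so digest failures show up as retried transient transport errors rather than failed I/Os in the bdevperf summary.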
00:23:07.305   06:33:24	-- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:07.305   06:33:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.305   06:33:24	-- common/autotest_common.sh@10 -- # set +x
00:23:07.305   06:33:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.305   06:33:24	-- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:07.305   06:33:24	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:07.563  nvme0n1
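Attaching the controller with --ddgst enables the NVMe/TCP data digest for this connection, so a CRC32C is carried on every data PDU; that is the checksum the corrupted accel crc32c operations will break, and nvme0n1 above is the namespace bdev the write workload targets. The resulting bdev could be inspected with, for example (a hedged check, not part of the traced script):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_bdevs -b nvme0n1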
00:23:07.563   06:33:24	-- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:07.563   06:33:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.563   06:33:24	-- common/autotest_common.sh@10 -- # set +x
00:23:07.563   06:33:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.563   06:33:24	-- host/digest.sh@69 -- # bperf_py perform_tests
00:23:07.563   06:33:24	-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:07.563  Running I/O for 2 seconds...
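The pass itself is a three-step sequence: corrupt the next 256 crc32c operations through the accel error-injection RPC (sent with rpc_cmd, i.e. to the harness's default RPC socket rather than the bperf one), kick off the 2-second workload with perform_tests, and finally read the transient-error counter back as in the previous pass. Sketched end to end, under those assumptions:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Corrupt the next 256 crc32c operations in the accel framework of the app
    # behind the default RPC socket (as selected by rpc_cmd in this harness).
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # Start the queued bdevperf workload and wait for it to finish.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # Count I/Os that completed with COMMAND TRANSIENT TRANSPORT ERROR.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'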
00:23:07.563  [2024-12-16 06:33:24.462322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eea00
00:23:07.563  [2024-12-16 06:33:24.462580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.462614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:23:07.563  [2024-12-16 06:33:24.471417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e5658
00:23:07.563  [2024-12-16 06:33:24.471826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.471866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:23:07.563  [2024-12-16 06:33:24.480447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ee5c8
00:23:07.563  [2024-12-16 06:33:24.480836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.480874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:23:07.563  [2024-12-16 06:33:24.489325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190de038
00:23:07.563  [2024-12-16 06:33:24.489645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.489707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:23:07.563  [2024-12-16 06:33:24.498151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e1b48
00:23:07.563  [2024-12-16 06:33:24.498419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.498520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:07.563  [2024-12-16 06:33:24.507000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ec840
00:23:07.563  [2024-12-16 06:33:24.507257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.507277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:23:07.563  [2024-12-16 06:33:24.515749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ed4e8
00:23:07.563  [2024-12-16 06:33:24.515967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.515987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:23:07.563  [2024-12-16 06:33:24.526897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ee5c8
00:23:07.563  [2024-12-16 06:33:24.527789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.527820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:07.563  [2024-12-16 06:33:24.533517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190efae0
00:23:07.563  [2024-12-16 06:33:24.533684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.563  [2024-12-16 06:33:24.533704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.544119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f31b8
00:23:07.823  [2024-12-16 06:33:24.544440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.544471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.552897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ed920
00:23:07.823  [2024-12-16 06:33:24.553264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.553302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.561840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ea680
00:23:07.823  [2024-12-16 06:33:24.562180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.562213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.570688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e88f8
00:23:07.823  [2024-12-16 06:33:24.571089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.571126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.579461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190df118
00:23:07.823  [2024-12-16 06:33:24.579856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.579893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.588259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e49b0
00:23:07.823  [2024-12-16 06:33:24.588744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.588781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.597433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eff18
00:23:07.823  [2024-12-16 06:33:24.597878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.597915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.606045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190de470
00:23:07.823  [2024-12-16 06:33:24.607383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.607420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.616805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4298
00:23:07.823  [2024-12-16 06:33:24.617475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.617542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.624431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd640
00:23:07.823  [2024-12-16 06:33:24.625452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.625542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.633764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e9e10
00:23:07.823  [2024-12-16 06:33:24.634536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.634587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.642359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f57b0
00:23:07.823  [2024-12-16 06:33:24.642780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.642834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.651957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e0a68
00:23:07.823  [2024-12-16 06:33:24.652991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.653022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.660647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fef90
00:23:07.823  [2024-12-16 06:33:24.661647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.661678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.670177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e3060
00:23:07.823  [2024-12-16 06:33:24.670709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.670748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.679037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e73e0
00:23:07.823  [2024-12-16 06:33:24.679752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.679800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.687765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fe2e8
00:23:07.823  [2024-12-16 06:33:24.688419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.688510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.696618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7538
00:23:07.823  [2024-12-16 06:33:24.697230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.697295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.705312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f31b8
00:23:07.823  [2024-12-16 06:33:24.705834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.705871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.713958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ed4e8
00:23:07.823  [2024-12-16 06:33:24.714666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.714703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.722732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f31b8
00:23:07.823  [2024-12-16 06:33:24.724193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.724226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.731351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6458
00:23:07.823  [2024-12-16 06:33:24.732471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.732545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.740672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190feb58
00:23:07.823  [2024-12-16 06:33:24.741350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.741399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.748359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f8a50
00:23:07.823  [2024-12-16 06:33:24.748584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.748604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.758257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e9e10
00:23:07.823  [2024-12-16 06:33:24.758834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.823  [2024-12-16 06:33:24.758874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:07.823  [2024-12-16 06:33:24.766709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f0bc0
00:23:07.823  [2024-12-16 06:33:24.767894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.824  [2024-12-16 06:33:24.767926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:07.824  [2024-12-16 06:33:24.775163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2948
00:23:07.824  [2024-12-16 06:33:24.775262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.824  [2024-12-16 06:33:24.775282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:23:07.824  [2024-12-16 06:33:24.784121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e1b48
00:23:07.824  [2024-12-16 06:33:24.784364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.824  [2024-12-16 06:33:24.784384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:07.824  [2024-12-16 06:33:24.793054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f46d0
00:23:07.824  [2024-12-16 06:33:24.793279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:07.824  [2024-12-16 06:33:24.793298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.802578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f3e60
00:23:08.083  [2024-12-16 06:33:24.802668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.802689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.813161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ebb98
00:23:08.083  [2024-12-16 06:33:24.814736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.814776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.822411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2d80
00:23:08.083  [2024-12-16 06:33:24.824316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.824365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.831961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6890
00:23:08.083  [2024-12-16 06:33:24.833188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.833236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.841077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fb8b8
00:23:08.083  [2024-12-16 06:33:24.842553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.842605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.850304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fbcf0
00:23:08.083  [2024-12-16 06:33:24.850932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.850967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.859342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2d80
00:23:08.083  [2024-12-16 06:33:24.860110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.860160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.868105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4f40
00:23:08.083  [2024-12-16 06:33:24.868863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.868910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.876963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7538
00:23:08.083  [2024-12-16 06:33:24.877722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.877770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.884700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4298
00:23:08.083  [2024-12-16 06:33:24.885020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.885049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.896087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd208
00:23:08.083  [2024-12-16 06:33:24.896919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.896980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.903858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f8e88
00:23:08.083  [2024-12-16 06:33:24.905327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.905376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.912537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f5378
00:23:08.083  [2024-12-16 06:33:24.913852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.913901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.921218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e4de8
00:23:08.083  [2024-12-16 06:33:24.921680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.921715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.929445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e73e0
00:23:08.083  [2024-12-16 06:33:24.929645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.929665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.940426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f9f68
00:23:08.083  [2024-12-16 06:33:24.941021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.941055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.948058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190edd58
00:23:08.083  [2024-12-16 06:33:24.948983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.949030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.957903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190de8a8
00:23:08.083  [2024-12-16 06:33:24.959029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.959082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.966631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e3060
00:23:08.083  [2024-12-16 06:33:24.967733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.967781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.974795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f8e88
00:23:08.083  [2024-12-16 06:33:24.975970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.976016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.984915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190dece0
00:23:08.083  [2024-12-16 06:33:24.985449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.985497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:24.992847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f1430
00:23:08.083  [2024-12-16 06:33:24.994027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:24.994075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:25.001320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fdeb0
00:23:08.083  [2024-12-16 06:33:25.001488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.083  [2024-12-16 06:33:25.001508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:08.083  [2024-12-16 06:33:25.010644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f31b8
00:23:08.083  [2024-12-16 06:33:25.011497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.084  [2024-12-16 06:33:25.011568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:08.084  [2024-12-16 06:33:25.019573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e4de8
00:23:08.084  [2024-12-16 06:33:25.020404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.084  [2024-12-16 06:33:25.020452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:08.084  [2024-12-16 06:33:25.028538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e4de8
00:23:08.084  [2024-12-16 06:33:25.029375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.084  [2024-12-16 06:33:25.029407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:08.084  [2024-12-16 06:33:25.038240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fa3a0
00:23:08.084  [2024-12-16 06:33:25.039629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.084  [2024-12-16 06:33:25.039679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:08.084  [2024-12-16 06:33:25.047435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd640
00:23:08.084  [2024-12-16 06:33:25.048321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.084  [2024-12-16 06:33:25.048369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:08.084  [2024-12-16 06:33:25.056365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f96f8
00:23:08.342  [2024-12-16 06:33:25.057750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.057799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.065739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190df550
00:23:08.342  [2024-12-16 06:33:25.066943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.066976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.074542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f3a28
00:23:08.342  [2024-12-16 06:33:25.075935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.075969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.083177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e7c50
00:23:08.342  [2024-12-16 06:33:25.084216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.084247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.092271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fcdd0
00:23:08.342  [2024-12-16 06:33:25.092990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.093038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.101014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f0bc0
00:23:08.342  [2024-12-16 06:33:25.101407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.101444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.109790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2510
00:23:08.342  [2024-12-16 06:33:25.110169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.110205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.118566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fc998
00:23:08.342  [2024-12-16 06:33:25.118990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.119026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.127382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e9e10
00:23:08.342  [2024-12-16 06:33:25.127886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.127922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.136122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eee38
00:23:08.342  [2024-12-16 06:33:25.137165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.137213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.145189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ebb98
00:23:08.342  [2024-12-16 06:33:25.146495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.146568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.153865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e9e10
00:23:08.342  [2024-12-16 06:33:25.155157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.342  [2024-12-16 06:33:25.155189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:08.342  [2024-12-16 06:33:25.163573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fc998
00:23:08.342  [2024-12-16 06:33:25.164416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.164446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.171671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f92c0
00:23:08.343  [2024-12-16 06:33:25.171912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.171947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.180572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fc128
00:23:08.343  [2024-12-16 06:33:25.181559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.181617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.190000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e3498
00:23:08.343  [2024-12-16 06:33:25.190513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.190549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.198845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ebfd0
00:23:08.343  [2024-12-16 06:33:25.199541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.199601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.206879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e49b0
00:23:08.343  [2024-12-16 06:33:25.208394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.208426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.215692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4298
00:23:08.343  [2024-12-16 06:33:25.217277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.217310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.224580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e99d8
00:23:08.343  [2024-12-16 06:33:25.226065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.226097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.233356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e88f8
00:23:08.343  [2024-12-16 06:33:25.234995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.235027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.242961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e99d8
00:23:08.343  [2024-12-16 06:33:25.243782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.243830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.251872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e88f8
00:23:08.343  [2024-12-16 06:33:25.252803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.252833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.260678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e7c50
00:23:08.343  [2024-12-16 06:33:25.261532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.261591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.269514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6458
00:23:08.343  [2024-12-16 06:33:25.270362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.270393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.278304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fa7d8
00:23:08.343  [2024-12-16 06:33:25.279232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.279264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.286670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7100
00:23:08.343  [2024-12-16 06:33:25.287178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.287215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.295982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fdeb0
00:23:08.343  [2024-12-16 06:33:25.296579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.296613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.305970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f1868
00:23:08.343  [2024-12-16 06:33:25.307543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.307605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:08.343  [2024-12-16 06:33:25.314285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eb760
00:23:08.343  [2024-12-16 06:33:25.315491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.343  [2024-12-16 06:33:25.315554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:08.601  [2024-12-16 06:33:25.323949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f1430
00:23:08.601  [2024-12-16 06:33:25.324344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.601  [2024-12-16 06:33:25.324379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:23:08.601  [2024-12-16 06:33:25.333471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e27f0
00:23:08.601  [2024-12-16 06:33:25.334272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.601  [2024-12-16 06:33:25.334336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:08.601  [2024-12-16 06:33:25.342174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd208
00:23:08.601  [2024-12-16 06:33:25.343525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.601  [2024-12-16 06:33:25.343587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:08.601  [2024-12-16 06:33:25.350854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fb048
00:23:08.601  [2024-12-16 06:33:25.352037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.601  [2024-12-16 06:33:25.352069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:08.601  [2024-12-16 06:33:25.360733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e5658
00:23:08.601  [2024-12-16 06:33:25.362095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.601  [2024-12-16 06:33:25.362127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:08.601  [2024-12-16 06:33:25.371615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e8d30
00:23:08.601  [2024-12-16 06:33:25.372640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.601  [2024-12-16 06:33:25.372670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:08.601  [2024-12-16 06:33:25.378147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f0788
00:23:08.601  [2024-12-16 06:33:25.378462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.601  [2024-12-16 06:33:25.378539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:08.601  [2024-12-16 06:33:25.389233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fa7d8
00:23:08.602  [2024-12-16 06:33:25.390052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.390100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.395855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fb048
00:23:08.602  [2024-12-16 06:33:25.395956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.395977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.406852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e6300
00:23:08.602  [2024-12-16 06:33:25.407460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.407523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.415931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fbcf0
00:23:08.602  [2024-12-16 06:33:25.416976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.417008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.424474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6cc8
00:23:08.602  [2024-12-16 06:33:25.425971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.426019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.433099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f0350
00:23:08.602  [2024-12-16 06:33:25.434229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.434275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.442349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e0630
00:23:08.602  [2024-12-16 06:33:25.443087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.443137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.449852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ebfd0
00:23:08.602  [2024-12-16 06:33:25.449973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.449993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.458882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f8e88
00:23:08.602  [2024-12-16 06:33:25.459371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.459406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.468568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2510
00:23:08.602  [2024-12-16 06:33:25.469982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.470030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.477629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4b08
00:23:08.602  [2024-12-16 06:33:25.478007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.478041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.486479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eb760
00:23:08.602  [2024-12-16 06:33:25.487043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.487092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.495239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e0a68
00:23:08.602  [2024-12-16 06:33:25.495761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.495796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.504003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f0ff8
00:23:08.602  [2024-12-16 06:33:25.504494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.504539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.512749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7100
00:23:08.602  [2024-12-16 06:33:25.513303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.513353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.520453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fa7d8
00:23:08.602  [2024-12-16 06:33:25.520569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.520589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.531444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f8618
00:23:08.602  [2024-12-16 06:33:25.531949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.531986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.540316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4b08
00:23:08.602  [2024-12-16 06:33:25.541028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.541075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.549056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7538
00:23:08.602  [2024-12-16 06:33:25.549739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.549786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.557818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f81e0
00:23:08.602  [2024-12-16 06:33:25.558425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.558458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.566598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e8d30
00:23:08.602  [2024-12-16 06:33:25.567194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.602  [2024-12-16 06:33:25.567228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:08.602  [2024-12-16 06:33:25.575424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4f40
00:23:08.861  [2024-12-16 06:33:25.576578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.576619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.584996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f20d8
00:23:08.861  [2024-12-16 06:33:25.585462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.585506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.594173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2510
00:23:08.861  [2024-12-16 06:33:25.595080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.595131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.602876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ee190
00:23:08.861  [2024-12-16 06:33:25.604274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.604308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.611468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f96f8
00:23:08.861  [2024-12-16 06:33:25.612512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.612553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.620700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ee5c8
00:23:08.861  [2024-12-16 06:33:25.621232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.621282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.629659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190de470
00:23:08.861  [2024-12-16 06:33:25.631169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.631202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.638269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ee190
00:23:08.861  [2024-12-16 06:33:25.639415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.639465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.647346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd208
00:23:08.861  [2024-12-16 06:33:25.647949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.648000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.656703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6458
00:23:08.861  [2024-12-16 06:33:25.657423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.657485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.666005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f5be8
00:23:08.861  [2024-12-16 06:33:25.666712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.666767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.675023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e1f80
00:23:08.861  [2024-12-16 06:33:25.675513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.675559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.683623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f3e60
00:23:08.861  [2024-12-16 06:33:25.684399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.684430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.692557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fda78
00:23:08.861  [2024-12-16 06:33:25.693818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.693866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.701984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e8d30
00:23:08.861  [2024-12-16 06:33:25.702388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.702425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.710852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e23b8
00:23:08.861  [2024-12-16 06:33:25.711716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.711747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.719667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2510
00:23:08.861  [2024-12-16 06:33:25.720208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.720244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.728410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eff18
00:23:08.861  [2024-12-16 06:33:25.728940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.728977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.737139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd208
00:23:08.861  [2024-12-16 06:33:25.737677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.737725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.745922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e3498
00:23:08.861  [2024-12-16 06:33:25.746421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.746457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.754964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e88f8
00:23:08.861  [2024-12-16 06:33:25.755915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.755946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.763297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e4de8
00:23:08.861  [2024-12-16 06:33:25.764317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.861  [2024-12-16 06:33:25.764365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:08.861  [2024-12-16 06:33:25.772141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190de8a8
00:23:08.861  [2024-12-16 06:33:25.773277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.862  [2024-12-16 06:33:25.773308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:08.862  [2024-12-16 06:33:25.781169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190feb58
00:23:08.862  [2024-12-16 06:33:25.781556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.862  [2024-12-16 06:33:25.781605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:08.862  [2024-12-16 06:33:25.791194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f81e0
00:23:08.862  [2024-12-16 06:33:25.791931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.862  [2024-12-16 06:33:25.791980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:08.862  [2024-12-16 06:33:25.798564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e9e10
00:23:08.862  [2024-12-16 06:33:25.799550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.862  [2024-12-16 06:33:25.799608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:08.862  [2024-12-16 06:33:25.807935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e88f8
00:23:08.862  [2024-12-16 06:33:25.808440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.862  [2024-12-16 06:33:25.808496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:23:08.862  [2024-12-16 06:33:25.817320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6890
00:23:08.862  [2024-12-16 06:33:25.817943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.862  [2024-12-16 06:33:25.817977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:08.862  [2024-12-16 06:33:25.827037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ebb98
00:23:08.862  [2024-12-16 06:33:25.828489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.862  [2024-12-16 06:33:25.828545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.837222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f8e88
00:23:09.121  [2024-12-16 06:33:25.838111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.838174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.845364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eff18
00:23:09.121  [2024-12-16 06:33:25.846617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.846671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.855533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f92c0
00:23:09.121  [2024-12-16 06:33:25.856413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.856443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.862163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e5220
00:23:09.121  [2024-12-16 06:33:25.862326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.862345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.872047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ec840
00:23:09.121  [2024-12-16 06:33:25.872577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.872614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.881021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e7c50
00:23:09.121  [2024-12-16 06:33:25.882051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.882081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.891023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7100
00:23:09.121  [2024-12-16 06:33:25.891826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.891855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.898809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e1710
00:23:09.121  [2024-12-16 06:33:25.900269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.900301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.908058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd640
00:23:09.121  [2024-12-16 06:33:25.908735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.908783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.916888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ebb98
00:23:09.121  [2024-12-16 06:33:25.917302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.917337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.924520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190df988
00:23:09.121  [2024-12-16 06:33:25.924693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.924713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.934304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ef270
00:23:09.121  [2024-12-16 06:33:25.934882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.934950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.942910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6020
00:23:09.121  [2024-12-16 06:33:25.943703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.943749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.951645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6890
00:23:09.121  [2024-12-16 06:33:25.952925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.952957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.960635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ea680
00:23:09.121  [2024-12-16 06:33:25.960973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.961003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.969507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eee38
00:23:09.121  [2024-12-16 06:33:25.970009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.970044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.978252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e7c50
00:23:09.121  [2024-12-16 06:33:25.978759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.978812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.987088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f35f0
00:23:09.121  [2024-12-16 06:33:25.987538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.987586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:25.995844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f8e88
00:23:09.121  [2024-12-16 06:33:25.996272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:25.996308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:26.004630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e2c28
00:23:09.121  [2024-12-16 06:33:26.005028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:26.005064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:26.013401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e0a68
00:23:09.121  [2024-12-16 06:33:26.013852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:26.013889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:09.121  [2024-12-16 06:33:26.022596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6020
00:23:09.121  [2024-12-16 06:33:26.024179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.121  [2024-12-16 06:33:26.024228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:09.122  [2024-12-16 06:33:26.033875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd208
00:23:09.122  [2024-12-16 06:33:26.034908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.122  [2024-12-16 06:33:26.034957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:09.122  [2024-12-16 06:33:26.040753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fb048
00:23:09.122  [2024-12-16 06:33:26.040962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.122  [2024-12-16 06:33:26.040982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:09.122  [2024-12-16 06:33:26.050709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f3a28
00:23:09.122  [2024-12-16 06:33:26.051281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.122  [2024-12-16 06:33:26.051319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:09.122  [2024-12-16 06:33:26.058679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190df988
00:23:09.122  [2024-12-16 06:33:26.058788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.122  [2024-12-16 06:33:26.058819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:09.122  [2024-12-16 06:33:26.069722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e3498
00:23:09.122  [2024-12-16 06:33:26.070223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.122  [2024-12-16 06:33:26.070259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:09.122  [2024-12-16 06:33:26.078726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e9168
00:23:09.122  [2024-12-16 06:33:26.079453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.122  [2024-12-16 06:33:26.079540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:09.122  [2024-12-16 06:33:26.087483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e5658
00:23:09.122  [2024-12-16 06:33:26.088138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.122  [2024-12-16 06:33:26.088200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.096907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e73e0
00:23:09.381  [2024-12-16 06:33:26.097512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.097613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.105377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190feb58
00:23:09.381  [2024-12-16 06:33:26.106451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.106546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.115590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f5be8
00:23:09.381  [2024-12-16 06:33:26.116184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.116217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.124458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4b08
00:23:09.381  [2024-12-16 06:33:26.125017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.125053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.133394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7100
00:23:09.381  [2024-12-16 06:33:26.134151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.134198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.141508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ebfd0
00:23:09.381  [2024-12-16 06:33:26.143053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.143104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.150303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e6b70
00:23:09.381  [2024-12-16 06:33:26.152076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.152124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.158574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e3060
00:23:09.381  [2024-12-16 06:33:26.159756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.159802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.169520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190dfdc0
00:23:09.381  [2024-12-16 06:33:26.170056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.170104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.178210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fa3a0
00:23:09.381  [2024-12-16 06:33:26.178966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.179017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.186850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7da8
00:23:09.381  [2024-12-16 06:33:26.187840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.187887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.194917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190efae0
00:23:09.381  [2024-12-16 06:33:26.196098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.196145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.206093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e6fa8
00:23:09.381  [2024-12-16 06:33:26.206812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.206862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.215258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f0bc0
00:23:09.381  [2024-12-16 06:33:26.216418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.216464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.223820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f8618
00:23:09.381  [2024-12-16 06:33:26.224997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.225048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.233032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f31b8
00:23:09.381  [2024-12-16 06:33:26.233824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.233872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.240527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e1710
00:23:09.381  [2024-12-16 06:33:26.240709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.240729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.250824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e7818
00:23:09.381  [2024-12-16 06:33:26.252325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.252358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.260699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2510
00:23:09.381  [2024-12-16 06:33:26.261788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.261837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.267930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f6cc8
00:23:09.381  [2024-12-16 06:33:26.268965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.268995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.276816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fd640
00:23:09.381  [2024-12-16 06:33:26.277809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.277858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.286792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f9b30
00:23:09.381  [2024-12-16 06:33:26.288269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.288301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.295001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f7100
00:23:09.381  [2024-12-16 06:33:26.295993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.296024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.303887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f31b8
00:23:09.381  [2024-12-16 06:33:26.305043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.381  [2024-12-16 06:33:26.305074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:09.381  [2024-12-16 06:33:26.312930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2948
00:23:09.382  [2024-12-16 06:33:26.313297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.382  [2024-12-16 06:33:26.313331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:23:09.382  [2024-12-16 06:33:26.324225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f1868
00:23:09.382  [2024-12-16 06:33:26.325128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.382  [2024-12-16 06:33:26.325157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:09.382  [2024-12-16 06:33:26.332164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f46d0
00:23:09.382  [2024-12-16 06:33:26.333737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.382  [2024-12-16 06:33:26.333784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:09.382  [2024-12-16 06:33:26.340324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190de470
00:23:09.382  [2024-12-16 06:33:26.341736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.382  [2024-12-16 06:33:26.341784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:09.382  [2024-12-16 06:33:26.349584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f2948
00:23:09.382  [2024-12-16 06:33:26.349894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.382  [2024-12-16 06:33:26.349958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.360289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190ddc00
00:23:09.640  [2024-12-16 06:33:26.361547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.361605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.369221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e4de8
00:23:09.640  [2024-12-16 06:33:26.370251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.370283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.378294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eee38
00:23:09.640  [2024-12-16 06:33:26.379460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.379524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.386691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e6b70
00:23:09.640  [2024-12-16 06:33:26.387436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.387511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.394780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e6b70
00:23:09.640  [2024-12-16 06:33:26.394950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.394969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.405083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190f4298
00:23:09.640  [2024-12-16 06:33:26.406134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.406165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.413637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190fc560
00:23:09.640  [2024-12-16 06:33:26.415229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.415266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.422621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e9168
00:23:09.640  [2024-12-16 06:33:26.423107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.423143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.431476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190eb760
00:23:09.640  [2024-12-16 06:33:26.432170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.432219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.440245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190e88f8
00:23:09.640  [2024-12-16 06:33:26.440901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.440949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:09.640  [2024-12-16 06:33:26.449036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16148f0) with pdu=0x2000190dece0
00:23:09.640  [2024-12-16 06:33:26.449691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.640  [2024-12-16 06:33:26.449753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:09.640  
00:23:09.640                                                                                                  Latency(us)
00:23:09.640  
[2024-12-16T06:33:26.616Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:09.640  
[2024-12-16T06:33:26.616Z]  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:09.640  	 nvme0n1             :       2.00   28354.80     110.76       0.00     0.00    4509.12    1884.16   12332.68
00:23:09.640  
[2024-12-16T06:33:26.616Z]  ===================================================================================================================
00:23:09.640  
[2024-12-16T06:33:26.616Z]  Total                       :              28354.80     110.76       0.00     0.00    4509.12    1884.16   12332.68
00:23:09.640  0
00:23:09.640    06:33:26	-- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:09.641    06:33:26	-- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:09.641    06:33:26	-- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:09.641  			| .driver_specific
00:23:09.641  			| .nvme_error
00:23:09.641  			| .status_code
00:23:09.641  			| .command_transient_transport_error'
00:23:09.641    06:33:26	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:09.899   06:33:26	-- host/digest.sh@71 -- # (( 222 > 0 ))
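The check above boils down to one RPC-plus-jq lookup against the bdevperf socket; a minimal standalone sketch of it, assuming the same socket path and bdev name used in this run, is:

    # Ask bdevperf (over its /var/tmp/bperf.sock RPC socket) for per-bdev NVMe error statistics
    # and pull out the transient transport error counter that the digest test asserts on.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The run passes only if at least one injected data digest error surfaced as a transient transport error.
    (( errcount > 0 ))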
00:23:09.899   06:33:26	-- host/digest.sh@73 -- # killprocess 87176
00:23:09.899   06:33:26	-- common/autotest_common.sh@936 -- # '[' -z 87176 ']'
00:23:09.899   06:33:26	-- common/autotest_common.sh@940 -- # kill -0 87176
00:23:09.899    06:33:26	-- common/autotest_common.sh@941 -- # uname
00:23:09.899   06:33:26	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:09.899    06:33:26	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87176
00:23:09.899  killing process with pid 87176
00:23:09.899  Received shutdown signal, test time was about 2.000000 seconds
00:23:09.899  
00:23:09.899                                                                                                  Latency(us)
00:23:09.899  
[2024-12-16T06:33:26.875Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:09.899  
[2024-12-16T06:33:26.875Z]  ===================================================================================================================
00:23:09.899  
[2024-12-16T06:33:26.875Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:09.899   06:33:26	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:09.899   06:33:26	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:09.899   06:33:26	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 87176'
00:23:09.899   06:33:26	-- common/autotest_common.sh@955 -- # kill 87176
00:23:09.899   06:33:26	-- common/autotest_common.sh@960 -- # wait 87176
00:23:10.157   06:33:27	-- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:23:10.157   06:33:27	-- host/digest.sh@54 -- # local rw bs qd
00:23:10.157   06:33:27	-- host/digest.sh@56 -- # rw=randwrite
00:23:10.157   06:33:27	-- host/digest.sh@56 -- # bs=131072
00:23:10.157   06:33:27	-- host/digest.sh@56 -- # qd=16
00:23:10.157   06:33:27	-- host/digest.sh@58 -- # bperfpid=87261
00:23:10.157   06:33:27	-- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:10.157   06:33:27	-- host/digest.sh@60 -- # waitforlisten 87261 /var/tmp/bperf.sock
00:23:10.157   06:33:27	-- common/autotest_common.sh@829 -- # '[' -z 87261 ']'
00:23:10.157   06:33:27	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:10.157   06:33:27	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:10.157   06:33:27	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:10.157  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:10.157   06:33:27	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:10.157   06:33:27	-- common/autotest_common.sh@10 -- # set +x
00:23:10.415  I/O size of 131072 is greater than zero copy threshold (65536).
00:23:10.415  Zero copy mechanism will not be used.
00:23:10.415  [2024-12-16 06:33:27.140741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:10.415  [2024-12-16 06:33:27.140845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87261 ]
00:23:10.415  [2024-12-16 06:33:27.270143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:10.415  [2024-12-16 06:33:27.379410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:11.350   06:33:28	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:11.350   06:33:28	-- common/autotest_common.sh@862 -- # return 0
00:23:11.350   06:33:28	-- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:11.350   06:33:28	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:11.350   06:33:28	-- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:11.350   06:33:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:11.350   06:33:28	-- common/autotest_common.sh@10 -- # set +x
00:23:11.350   06:33:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:11.350   06:33:28	-- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:11.350   06:33:28	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:11.918  nvme0n1
00:23:11.918   06:33:28	-- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:11.918   06:33:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:11.918   06:33:28	-- common/autotest_common.sh@10 -- # set +x
00:23:11.918   06:33:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:11.918   06:33:28	-- host/digest.sh@69 -- # bperf_py perform_tests
00:23:11.918   06:33:28	-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
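Condensed from the xtrace above, the second error-injection pass is driven by the following RPC sequence; this is a sketch using the addresses, NQN, and bdev name shown in this run, and it assumes rpc_cmd talks to the NVMe-oF target app's default RPC socket rather than bperf.sock:

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # bdevperf side: keep per-status-code NVMe error counters and retry failed I/O indefinitely,
    # so digest errors show up as counters instead of failing the workload.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # bdevperf side: attach the target over TCP with data digest enabled (--ddgst); the namespace appears as nvme0n1.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side (rpc_cmd, default target socket): corrupt every 32nd crc32c calculation so a
    # fraction of WRITEs fail the TCP data digest check.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # bdevperf side: run the timed workload bdevperf was started with (-w randwrite -o 131072 -q 16 -t 2).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests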
00:23:11.918  I/O size of 131072 is greater than zero copy threshold (65536).
00:23:11.918  Zero copy mechanism will not be used.
00:23:11.918  Running I/O for 2 seconds...
00:23:11.918  [2024-12-16 06:33:28.752400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.752854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.752900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.756637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.756999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.757059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.760764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.760992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.761024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.764850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.764979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.765001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.768955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.769062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.769085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.772872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.772997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.773019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.776994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.777110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.777132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.781004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.781128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.781149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.785060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.785222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.785244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.789026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.789226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.789247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.793030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.793156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.793178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.797077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.797215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.797236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.800987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.801087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.801108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.805067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.805211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.805232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.809017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.809132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.809153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.812970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.813066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.813087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.816933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.817103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.817124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.821023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.821363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.821399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.825049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.825179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.825200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.829153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.829329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.829350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.833181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.833354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.833375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.837097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.837223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.837245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.841177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.841369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.841389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.845271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.845367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.845389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:11.918  [2024-12-16 06:33:28.849293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.918  [2024-12-16 06:33:28.849432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.918  [2024-12-16 06:33:28.849453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.853219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.853364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.853385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.857219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.857400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.857421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.861189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.861359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.861379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.865151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.865417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.865506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.869174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.869302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.869323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.873289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.873421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.873442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.877315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.877409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.877431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.881459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.881636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.881658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.885390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.885541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.885562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:11.919  [2024-12-16 06:33:28.889563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:11.919  [2024-12-16 06:33:28.889663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.919  [2024-12-16 06:33:28.889684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.894062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.894223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.894244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.898312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.898524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.898546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.902319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.902445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.902467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.906445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.906635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.906657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.910449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.910600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.910622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.914571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.914756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.914799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.918587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.918739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.918761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.922511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.922624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.922645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.926535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.926722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.926745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.930599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.930871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.930923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.934547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.934658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.934679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.938618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.938820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.938842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.942446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.942616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.942639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.946465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.946647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.946668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.950423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.950644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.950677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.954367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.954542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.954565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.958513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.958688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.958722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.962405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.962610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.962642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.966336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.179  [2024-12-16 06:33:28.966523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.179  [2024-12-16 06:33:28.966544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.179  [2024-12-16 06:33:28.970511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:28.970694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:28.970715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:28.974466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:28.974604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:28.974626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:28.978552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:28.978702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:28.978723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:28.982500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:28.982622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:28.982643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:28.986439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:28.986612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:28.986634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:28.990518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:28.990693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:28.990726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:28.994443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:28.994649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:28.994681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:28.998382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:28.998562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:28.998583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.002519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.002715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.002748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.006531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.006657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.006689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.010580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.010754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.010806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.014553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.014676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.014697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.018596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.018735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.018756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.022640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.022835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.022856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.026703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.026895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.026926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.030747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.030911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.030932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.034852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.035104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.035183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.038816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.039081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.039129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.042760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.042868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.042888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.046897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.047051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.047072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.050941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.051057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.051078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.055066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.055263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.055283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.059077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.059304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.059338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.063067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.063169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.063190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.067063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.067225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.067246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.071101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.071279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.071300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.075055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.075227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.075248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.180  [2024-12-16 06:33:29.079127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.180  [2024-12-16 06:33:29.079307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.180  [2024-12-16 06:33:29.079327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.083083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.083187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.083208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.087089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.087251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.087273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.091077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.091179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.091200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.095039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.095129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.095150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.099060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.099217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.099237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.103174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.103374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.103395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.107121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.107224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.107245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.111177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.111336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.111358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.115202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.115334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.115358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.119325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.119485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.119521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.123236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.123389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.123410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.127281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.127406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.127426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.131383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.131587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.131619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.135511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.135666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.135687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.139616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.139734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.139761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.143767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.143914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.143935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.147750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.147848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.147869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.181  [2024-12-16 06:33:29.152184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.181  [2024-12-16 06:33:29.152360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.181  [2024-12-16 06:33:29.152381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.156363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.156486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.156524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.160627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.160731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.160753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.164757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.164960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.164981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.168661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.168850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.168872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.172721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.172858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.172880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.176779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.176982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.177003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.180882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.181121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.181167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.184815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.184953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.184975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.188901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.189098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.189118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.192889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.193040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.193061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.196999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.197170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.441  [2024-12-16 06:33:29.197191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.441  [2024-12-16 06:33:29.200957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.441  [2024-12-16 06:33:29.201095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.201116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.204999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.205129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.205152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.209017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.209185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.209205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.212988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.213277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.213340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.216971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.217065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.217086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.221004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.221188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.221208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.225049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.225224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.225245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.228950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.229120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.229142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.232990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.233146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.233166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.237034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.237134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.237156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.241154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.241310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.241330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.245165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.245309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.245329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.249197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.249295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.249316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.253336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.253525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.253546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.257326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.257646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.257693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.261219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.261312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.261333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.265311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.265486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.265521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.269325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.269628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.269692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.273313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.273423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.273445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.277325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.277526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.277547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.281333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.281714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.281754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.285354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.285513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.285534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.289698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.289903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.289922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.293739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.293838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.293859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.297800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.297893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.297913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.301963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.302096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.302118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.306066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.306205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.306225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.310244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.442  [2024-12-16 06:33:29.310440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.442  [2024-12-16 06:33:29.310461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.442  [2024-12-16 06:33:29.314326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.314602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.314625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.318389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.318542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.318564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.322624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.322866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.322944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.326688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.326889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.326910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.330774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.330909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.330929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.335022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.335151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.335172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.339108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.339213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.339234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.343291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.343466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.343486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.347354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.347709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.347751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.351480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.351617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.351638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.355605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.355774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.355802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.359655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.359956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.359994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.363632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.363786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.363808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.367802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.367966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.367987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.371905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.372172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.372234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.376059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.376181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.376202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.380785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.381006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.381026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.384939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.385101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.385121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.389071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.389257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.389277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.393175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.393266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.393287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.397266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.397355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.397376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.401459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.401653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.401674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.405584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.405848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.405898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.409626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.409757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.409779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.443  [2024-12-16 06:33:29.414101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.443  [2024-12-16 06:33:29.414247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.443  [2024-12-16 06:33:29.414268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.418310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.418420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.418441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.422744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.422938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.422960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.426776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.426911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.426931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.430842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.430991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.431013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.434829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.434984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.435005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.438923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.439168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.439189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.442885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.443114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.443135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.447005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.447169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.447191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.450991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.451087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.451107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.455095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.455187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.455208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.459100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.459225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.459246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.463084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.463190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.463212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.467098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.467268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.467289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.471199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.471383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.471403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.475219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.475330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.475351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.479336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.479474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.479494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.483320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.483420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.483442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.487390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.487580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.487601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.491339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.491473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.491494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.495328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.495425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.495446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.499364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.499575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.499596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.503365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.503704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.503737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.507330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.507435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.507457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.511328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.511501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.511533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.515341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.515493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.515514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.519371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.704  [2024-12-16 06:33:29.519492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.704  [2024-12-16 06:33:29.519528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.704  [2024-12-16 06:33:29.523410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.523612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.523633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.527373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.527484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.527506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.531401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.531563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.531583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.535450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.535567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.535588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.539453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.539616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.539637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.543415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.543608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.543629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.547430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.547798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.547834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.551610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.551703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.551724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.555723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.555902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.555922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.559727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.559827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.559849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.563790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.563974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.563994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.567714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.567832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.567852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.571737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.571820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.571841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.575872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.576039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.576060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.579981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.580188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.580209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.584140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.584237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.584258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.588234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.588382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.588403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.592122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.592233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.592254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.596154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.596311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.596333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.600329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.600472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.600508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.604352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.604446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.604468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.608548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.608721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.608742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.612543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.612850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.612890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.616447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.616639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.616660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.620501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.620670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.620692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.624508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.624684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.624708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.628473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.628645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.628667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.705  [2024-12-16 06:33:29.632602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.705  [2024-12-16 06:33:29.632753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.705  [2024-12-16 06:33:29.632775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.636539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.636650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.636672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.640570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.640761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.640783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.644608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.644749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.644770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.648610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.648715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.648736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.652738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.652919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.652940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.656710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.656870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.656907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.660620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.660874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.660922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.664736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.664945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.664984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.668669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.668807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.668829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.672676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.672785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.672807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.706  [2024-12-16 06:33:29.677037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.706  [2024-12-16 06:33:29.677157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.706  [2024-12-16 06:33:29.677179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.681280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.681393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.681414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.685611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.685785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.685806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.689618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.689940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.689989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.693615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.693811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.693832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.697676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.697803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.697823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.701737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.701845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.701866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.705876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.706003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.706024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.710043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.710181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.710203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.714138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.714243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.714266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.718351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.718583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.718607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.722535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.722694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.722717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.726852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.727048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.727069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.731304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.731465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.731487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.735644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.735970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.736005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.739886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.740020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.740050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.744287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.744460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.744513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.748495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.748775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.748814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.752638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.966  [2024-12-16 06:33:29.752870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.966  [2024-12-16 06:33:29.752902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.966  [2024-12-16 06:33:29.756976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.757177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.757198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.761039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.761151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.761173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.765185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.765344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.765366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.769221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.769404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.769425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.773199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.773350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.773373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.777248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.777424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.777446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.781295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.781642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.781679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.785407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.785684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.785731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.789381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.789550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.789571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.793342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.793457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.793478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.797383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.797548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.797570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.801409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.801527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.801590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.805414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.805542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.805564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.809462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.809658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.809680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.813403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.813751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.813797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.817468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.817692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.817713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.821512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.821705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.821726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.825463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.825613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.825635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.829556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.829714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.829736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.833560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.833686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.833708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.837613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.837742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.837763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.841626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.841811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.841832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.845619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.845806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.845828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.849626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.849741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.849762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.853735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.853927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.853948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.857685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.857802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.857823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.861739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.861903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.861924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.967  [2024-12-16 06:33:29.865754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.967  [2024-12-16 06:33:29.865884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.967  [2024-12-16 06:33:29.865905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.869690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.869794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.869816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.873657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.873832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.873854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.877562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.877760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.877782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.881452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.881600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.881621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.885598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.885768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.885789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.889521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.889644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.889666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.893517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.893687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.893709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.897538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.897692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.897714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.901519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.901633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.901655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.905498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.905680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.905702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.909577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.909917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.909956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.913343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.913598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.913619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.917394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.917578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.917601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.921402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.921644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.921666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.925418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.925550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.925573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.929510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.929718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.929739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.933385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.933540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.933562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:12.968  [2024-12-16 06:33:29.937573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:12.968  [2024-12-16 06:33:29.937684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.968  [2024-12-16 06:33:29.937705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.942080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.942233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.942254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.946387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.946564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.946587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.950464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.950684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.950707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.954570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.954859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.954921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.958594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.958691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.958715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.962859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.963035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.963056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.966933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.967037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.967059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.971017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.971166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.971187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.975036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.975155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.975175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.979027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.979153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.979173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.983042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.983216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.983237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.987161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.987410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.987487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.991131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.991372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.991441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.995191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.995362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.995382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:29.999186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:29.999282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:29.999304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:30.004085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:30.004283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:30.004305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:30.008605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:30.008723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:30.008745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:30.012972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.228  [2024-12-16 06:33:30.013083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.228  [2024-12-16 06:33:30.013104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.228  [2024-12-16 06:33:30.017222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.017424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.017446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.021507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.021727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.021754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.027090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.027271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.027293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.032045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.032245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.032266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.036205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.036330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.036351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.040320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.040482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.040504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.044456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.044657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.044678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.048554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.048664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.048685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.052755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.052916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.052937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.056830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.056993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.057014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.060930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.061078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.061098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.065077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.065250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.065271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.069213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.069340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.069362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.073381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.073534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.073555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.077406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.077576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.077597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.081441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.081604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.081625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.085573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.085724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.085745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.089613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.089819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.089855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.093545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.093666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.093686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.097722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.097862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.097882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.101812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.101928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.101948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.105896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.106045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.106066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.109929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.110054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.110076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.114042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.114163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.114183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.118104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.118275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.118296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.122126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.122409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.122458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.126131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.126254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.126276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.130288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.229  [2024-12-16 06:33:30.130542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.229  [2024-12-16 06:33:30.130566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.229  [2024-12-16 06:33:30.134359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.134465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.134525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.138390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.138575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.138597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.142415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.142645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.142666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.146388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.146533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.146555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.150447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.150636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.150657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.154537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.154738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.154775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.158440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.158591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.158613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.162572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.162693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.162715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.166596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.166710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.166731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.170618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.170815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.170836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.174579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.174721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.174744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.178515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.178653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.178674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.182451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.182661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.182683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.186562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.186886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.186935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.190629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.190759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.190809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.194666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.194845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.194865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.230  [2024-12-16 06:33:30.198894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.230  [2024-12-16 06:33:30.199072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.230  [2024-12-16 06:33:30.199092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.203336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.203514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.203548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.207435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.207601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.207622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.211658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.211776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.211799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.215698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.215859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.215880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.219777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.219964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.219985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.223862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.223991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.224011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.227911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.228104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.228125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.231900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.231997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.232018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.235932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.236074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.236093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.239801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.239911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.239931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.243779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.243892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.243914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.247955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.248119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.248140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.251983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.252197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.252217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.256047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.256166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.256187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.260129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.260318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.260338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.264123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.264220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.264241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.490  [2024-12-16 06:33:30.268151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.490  [2024-12-16 06:33:30.268324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.490  [2024-12-16 06:33:30.268345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.272128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.272252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.272273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.276091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.276234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.276255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.280174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.280335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.280356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.284008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.284262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.284340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.287999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.288122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.288142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.292026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.292189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.292209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.295997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.296249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.296327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.299971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.300094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.300116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.303993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.304160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.304181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.308046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.308306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.308385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.311967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.312090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.312111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.316021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.316190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.316210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.320024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.320327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.320376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.324010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.324133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.324153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.328098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.328281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.328302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.332160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.332379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.332399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.336220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.336423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.336444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.340348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.340529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.340550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.344356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.344456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.344476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.348428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.348647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.348668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.352409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.352551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.352572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.356405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.356569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.356590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.360447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.360637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.360658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.364570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.364758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.364780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.368471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.368663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.368685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.372655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.372904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.372941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.376640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.376754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.376775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.491  [2024-12-16 06:33:30.380647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.491  [2024-12-16 06:33:30.380804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.491  [2024-12-16 06:33:30.380825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.384731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.384895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.384914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.388726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.388850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.388872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.392794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.392977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.392997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.396677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.396896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.396917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.400691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.400805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.400827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.404780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.404988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.405009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.408803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.408918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.408938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.412806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.412986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.413006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.416813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.416962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.416982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.420731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.420852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.420873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.424801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.424957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.424977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.428769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.429072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.429120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.432693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.432786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.432808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.436835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.436976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.436997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.440807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.440899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.440921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.444830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.444978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.444999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.448849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.448958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.448979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.452846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.452958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.452979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.456836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.457017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.457038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.492  [2024-12-16 06:33:30.460998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.492  [2024-12-16 06:33:30.461245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.492  [2024-12-16 06:33:30.461323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.752  [2024-12-16 06:33:30.465392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.752  [2024-12-16 06:33:30.465486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.752  [2024-12-16 06:33:30.465521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.752  [2024-12-16 06:33:30.469649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.752  [2024-12-16 06:33:30.469807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.752  [2024-12-16 06:33:30.469827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.752  [2024-12-16 06:33:30.473927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.752  [2024-12-16 06:33:30.474025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.752  [2024-12-16 06:33:30.474045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.752  [2024-12-16 06:33:30.478044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.752  [2024-12-16 06:33:30.478209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.478229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.482145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.482252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.482272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.486274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.486386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.486407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.490440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.490646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.490667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.494397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.494665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.494689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.498541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.498793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.498870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.502568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.502729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.502750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.506622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.506725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.506748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.510704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.510870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.510891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.514681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.514918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.514970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.518643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.518770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.518823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.522686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.522874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.522894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.526613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.526816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.526853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.530552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.530656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.530686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.534833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.535041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.535061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.538741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.538842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.538862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.542781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.542931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.542951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.546751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.546883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.546904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.550773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.550878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.550899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.554809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.554976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.554996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.558807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.559128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.559177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.562857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.562976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.562996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.566886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.567061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.567081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.570942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.571050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.571070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.575008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.575176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.575197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.578930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.579051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.579072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.582975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.583064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.583084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.586973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.587144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.587164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.591031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.753  [2024-12-16 06:33:30.591239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.753  [2024-12-16 06:33:30.591261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.753  [2024-12-16 06:33:30.594977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.595158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.595178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.600098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.600231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.600269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.604506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.604683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.604705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.608857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.609042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.609065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.612955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.613079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.613100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.617013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.617116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.617137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.621220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.621391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.621412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.625221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.625401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.625421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.629228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.629345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.629366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.633330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.633529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.633550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.637303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.637391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.637412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.641419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.641576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.641597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.645464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.645577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.645597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.649471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.649575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.649596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.653507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.653676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.653696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.657457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.657632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.657652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.661353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.661478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.661515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.665414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.665574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.665595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.669347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.669493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.669526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.673402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.673556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.673577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.677465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.677605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.677625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.681599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.681690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.681711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.685740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.685904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.685925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.689624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.689813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.689834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.693674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.693768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.693790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.697698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.697905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.697925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.701688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.701788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.701809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.754  [2024-12-16 06:33:30.705714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.754  [2024-12-16 06:33:30.705894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.754  [2024-12-16 06:33:30.705915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.755  [2024-12-16 06:33:30.709844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.755  [2024-12-16 06:33:30.709969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.755  [2024-12-16 06:33:30.709991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.755  [2024-12-16 06:33:30.713840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.755  [2024-12-16 06:33:30.713971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.755  [2024-12-16 06:33:30.713992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.755  [2024-12-16 06:33:30.717876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.755  [2024-12-16 06:33:30.718048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.755  [2024-12-16 06:33:30.718069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:13.755  [2024-12-16 06:33:30.721910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:13.755  [2024-12-16 06:33:30.722176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.755  [2024-12-16 06:33:30.722196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.755  [2024-12-16 06:33:30.726246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:14.013  [2024-12-16 06:33:30.726437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.013  [2024-12-16 06:33:30.726458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:14.013  [2024-12-16 06:33:30.730453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:14.013  [2024-12-16 06:33:30.730659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.013  [2024-12-16 06:33:30.730681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:14.013  [2024-12-16 06:33:30.734845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:14.013  [2024-12-16 06:33:30.734957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.014  [2024-12-16 06:33:30.734977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:14.014  [2024-12-16 06:33:30.738965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:14.014  [2024-12-16 06:33:30.739186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.014  [2024-12-16 06:33:30.739206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:14.014  [2024-12-16 06:33:30.743099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:14.014  [2024-12-16 06:33:30.743254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.014  [2024-12-16 06:33:30.743274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:14.014  [2024-12-16 06:33:30.747120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614a90) with pdu=0x2000190fef90
00:23:14.014  [2024-12-16 06:33:30.747209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.014  [2024-12-16 06:33:30.747229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:14.014  
00:23:14.014                                                                                                  Latency(us)
00:23:14.014  
[2024-12-16T06:33:30.990Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:14.014  
[2024-12-16T06:33:30.990Z]  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:14.014  	 nvme0n1             :       2.00    7605.40     950.67       0.00     0.00    2098.99    1541.59    5510.98
00:23:14.014  
[2024-12-16T06:33:30.990Z]  ===================================================================================================================
00:23:14.014  
[2024-12-16T06:33:30.990Z]  Total                       :               7605.40     950.67       0.00     0.00    2098.99    1541.59    5510.98
00:23:14.014  0
00:23:14.014    06:33:30	-- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:14.014    06:33:30	-- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:14.014  			| .driver_specific
00:23:14.014  			| .nvme_error
00:23:14.014  			| .status_code
00:23:14.014  			| .command_transient_transport_error'
00:23:14.014    06:33:30	-- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:14.014    06:33:30	-- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
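Editor's note: the two xtrace lines above show how host/digest.sh counts the injected digest failures: it calls bdev_get_iostat over the bperf RPC socket and pulls the NVMe transient-transport-error counter out with jq. A minimal standalone sketch of that query, assuming the rpc.py path, socket and bdev name shown in the trace:

# Sketch of the get_transient_errcount query traced above (host/digest.sh);
# rpc.py path, socket and jq filter are taken verbatim from the trace.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}
# The test then asserts the count is non-zero, e.g.:
# (( $(get_transient_errcount nvme0n1) > 0 ))   # 491 > 0 in this run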
00:23:14.275   06:33:31	-- host/digest.sh@71 -- # (( 491 > 0 ))
00:23:14.275   06:33:31	-- host/digest.sh@73 -- # killprocess 87261
00:23:14.275   06:33:31	-- common/autotest_common.sh@936 -- # '[' -z 87261 ']'
00:23:14.275   06:33:31	-- common/autotest_common.sh@940 -- # kill -0 87261
00:23:14.275    06:33:31	-- common/autotest_common.sh@941 -- # uname
00:23:14.275   06:33:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:14.276    06:33:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87261
00:23:14.276  killing process with pid 87261
00:23:14.276  Received shutdown signal, test time was about 2.000000 seconds
00:23:14.276  
00:23:14.276                                                                                                  Latency(us)
00:23:14.276  
[2024-12-16T06:33:31.252Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:14.276  
[2024-12-16T06:33:31.252Z]  ===================================================================================================================
00:23:14.276  
[2024-12-16T06:33:31.252Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:14.276   06:33:31	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:14.276   06:33:31	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:14.276   06:33:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 87261'
00:23:14.276   06:33:31	-- common/autotest_common.sh@955 -- # kill 87261
00:23:14.276   06:33:31	-- common/autotest_common.sh@960 -- # wait 87261
00:23:14.536   06:33:31	-- host/digest.sh@115 -- # killprocess 86949
00:23:14.536   06:33:31	-- common/autotest_common.sh@936 -- # '[' -z 86949 ']'
00:23:14.536   06:33:31	-- common/autotest_common.sh@940 -- # kill -0 86949
00:23:14.536    06:33:31	-- common/autotest_common.sh@941 -- # uname
00:23:14.536   06:33:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:14.536    06:33:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86949
00:23:14.536  killing process with pid 86949
00:23:14.536   06:33:31	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:14.536   06:33:31	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:14.536   06:33:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 86949'
00:23:14.536   06:33:31	-- common/autotest_common.sh@955 -- # kill 86949
00:23:14.536   06:33:31	-- common/autotest_common.sh@960 -- # wait 86949
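Editor's note: both killprocess calls above (pids 87261 and 86949) follow the same traced flow: check the pid is non-empty, probe it with kill -0, read the process name with ps, then kill and wait. A rough sketch of that flow, not the exact autotest_common.sh implementation (the sudo special case and some error handling are omitted):

# Rough reconstruction of the killprocess flow traced above (test/common/autotest_common.sh);
# the real helper also special-cases sudo-wrapped processes, omitted here.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"   # seen later for pid 86949
        return 1
    fi
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 / reactor_0 in this run
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}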
00:23:14.794  
00:23:14.794  real	0m18.572s
00:23:14.794  user	0m33.821s
00:23:14.794  sys	0m5.766s
00:23:14.794  ************************************
00:23:14.794  END TEST nvmf_digest_error
00:23:14.794  ************************************
00:23:14.794   06:33:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:14.794   06:33:31	-- common/autotest_common.sh@10 -- # set +x
00:23:14.794   06:33:31	-- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:23:14.794   06:33:31	-- host/digest.sh@139 -- # nvmftestfini
00:23:14.794   06:33:31	-- nvmf/common.sh@476 -- # nvmfcleanup
00:23:14.794   06:33:31	-- nvmf/common.sh@116 -- # sync
00:23:14.794   06:33:31	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:14.794   06:33:31	-- nvmf/common.sh@119 -- # set +e
00:23:14.794   06:33:31	-- nvmf/common.sh@120 -- # for i in {1..20}
00:23:14.794   06:33:31	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:23:14.794  rmmod nvme_tcp
00:23:14.794  rmmod nvme_fabrics
00:23:15.052  rmmod nvme_keyring
00:23:15.052   06:33:31	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:23:15.052   06:33:31	-- nvmf/common.sh@123 -- # set -e
00:23:15.052   06:33:31	-- nvmf/common.sh@124 -- # return 0
00:23:15.052   06:33:31	-- nvmf/common.sh@477 -- # '[' -n 86949 ']'
00:23:15.052   06:33:31	-- nvmf/common.sh@478 -- # killprocess 86949
00:23:15.052   06:33:31	-- common/autotest_common.sh@936 -- # '[' -z 86949 ']'
00:23:15.052  Process with pid 86949 is not found
00:23:15.052   06:33:31	-- common/autotest_common.sh@940 -- # kill -0 86949
00:23:15.052  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (86949) - No such process
00:23:15.052   06:33:31	-- common/autotest_common.sh@963 -- # echo 'Process with pid 86949 is not found'
00:23:15.052   06:33:31	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:23:15.052   06:33:31	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:23:15.052   06:33:31	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:23:15.052   06:33:31	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:15.052   06:33:31	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:23:15.052   06:33:31	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:15.052   06:33:31	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:15.052    06:33:31	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:15.052   06:33:31	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
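Editor's note: the nvmftestfini block above tears the TCP test target down: sync, unload the nvme-tcp and nvme-fabrics kernel modules (which drags out nvme_tcp, nvme_fabrics and nvme_keyring, as logged), kill the remaining nvmf target process if any, remove the SPDK test namespace, and flush the initiator-side interface. A condensed sketch of that sequence under the same names shown in the trace; the modprobe retry loop, iso-mode handling and the _remove_spdk_ns body are not shown here:

# Condensed sketch of nvmftestfini / nvmf_tcp_fini as traced above (nvmf/common.sh).
sync
modprobe -v -r nvme-tcp                        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
[[ -n $nvmfpid ]] && killprocess "$nvmfpid"    # pid 86949 here, already gone by this point
_remove_spdk_ns                                # removes the SPDK-created test namespace
ip -4 addr flush nvmf_init_if                  # drop the initiator-side test addresses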
00:23:15.052  
00:23:15.052  real	0m38.439s
00:23:15.052  user	1m8.711s
00:23:15.052  sys	0m11.856s
00:23:15.052   06:33:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:15.052   06:33:31	-- common/autotest_common.sh@10 -- # set +x
00:23:15.052  ************************************
00:23:15.052  END TEST nvmf_digest
00:23:15.052  ************************************
00:23:15.052   06:33:31	-- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]]
00:23:15.052   06:33:31	-- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]]
00:23:15.052   06:33:31	-- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:23:15.052   06:33:31	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:23:15.052   06:33:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:15.052   06:33:31	-- common/autotest_common.sh@10 -- # set +x
00:23:15.052  ************************************
00:23:15.052  START TEST nvmf_mdns_discovery
00:23:15.052  ************************************
00:23:15.052   06:33:31	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:23:15.052  * Looking for test storage...
00:23:15.052  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:23:15.052    06:33:31	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:23:15.053     06:33:31	-- common/autotest_common.sh@1690 -- # lcov --version
00:23:15.053     06:33:31	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:23:15.315    06:33:32	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:23:15.315    06:33:32	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:23:15.315    06:33:32	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:23:15.315    06:33:32	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:23:15.315    06:33:32	-- scripts/common.sh@335 -- # IFS=.-:
00:23:15.315    06:33:32	-- scripts/common.sh@335 -- # read -ra ver1
00:23:15.315    06:33:32	-- scripts/common.sh@336 -- # IFS=.-:
00:23:15.315    06:33:32	-- scripts/common.sh@336 -- # read -ra ver2
00:23:15.315    06:33:32	-- scripts/common.sh@337 -- # local 'op=<'
00:23:15.315    06:33:32	-- scripts/common.sh@339 -- # ver1_l=2
00:23:15.315    06:33:32	-- scripts/common.sh@340 -- # ver2_l=1
00:23:15.315    06:33:32	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:23:15.315    06:33:32	-- scripts/common.sh@343 -- # case "$op" in
00:23:15.315    06:33:32	-- scripts/common.sh@344 -- # : 1
00:23:15.315    06:33:32	-- scripts/common.sh@363 -- # (( v = 0 ))
00:23:15.315    06:33:32	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:15.315     06:33:32	-- scripts/common.sh@364 -- # decimal 1
00:23:15.315     06:33:32	-- scripts/common.sh@352 -- # local d=1
00:23:15.315     06:33:32	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:15.315     06:33:32	-- scripts/common.sh@354 -- # echo 1
00:23:15.315    06:33:32	-- scripts/common.sh@364 -- # ver1[v]=1
00:23:15.315     06:33:32	-- scripts/common.sh@365 -- # decimal 2
00:23:15.315     06:33:32	-- scripts/common.sh@352 -- # local d=2
00:23:15.315     06:33:32	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:15.315     06:33:32	-- scripts/common.sh@354 -- # echo 2
00:23:15.315    06:33:32	-- scripts/common.sh@365 -- # ver2[v]=2
00:23:15.315    06:33:32	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:23:15.315    06:33:32	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:23:15.315    06:33:32	-- scripts/common.sh@367 -- # return 0
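Editor's note: the block above is the xtrace of scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2: both version strings are split on '.', '-' and ':' into arrays and compared field by field. A simplified sketch of that comparison, assuming purely numeric fields (the real script routes each field through its decimal helper and tracks lt/gt/eq counters):

# Simplified sketch of the lt/cmp_versions logic traced above (scripts/common.sh).
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '>=' || $op == '<=' || $op == '==' ]]
}
# lt 1.15 2 compares 1 vs 2 in the first field and returns 0 (true), as in the trace.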
00:23:15.315    06:33:32	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:15.315    06:33:32	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:23:15.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:15.315  		--rc genhtml_branch_coverage=1
00:23:15.315  		--rc genhtml_function_coverage=1
00:23:15.315  		--rc genhtml_legend=1
00:23:15.315  		--rc geninfo_all_blocks=1
00:23:15.315  		--rc geninfo_unexecuted_blocks=1
00:23:15.315  		
00:23:15.315  		'
00:23:15.315    06:33:32	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:23:15.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:15.315  		--rc genhtml_branch_coverage=1
00:23:15.315  		--rc genhtml_function_coverage=1
00:23:15.315  		--rc genhtml_legend=1
00:23:15.315  		--rc geninfo_all_blocks=1
00:23:15.315  		--rc geninfo_unexecuted_blocks=1
00:23:15.315  		
00:23:15.315  		'
00:23:15.315    06:33:32	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:23:15.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:15.315  		--rc genhtml_branch_coverage=1
00:23:15.315  		--rc genhtml_function_coverage=1
00:23:15.315  		--rc genhtml_legend=1
00:23:15.315  		--rc geninfo_all_blocks=1
00:23:15.315  		--rc geninfo_unexecuted_blocks=1
00:23:15.315  		
00:23:15.315  		'
00:23:15.315    06:33:32	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:23:15.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:15.315  		--rc genhtml_branch_coverage=1
00:23:15.315  		--rc genhtml_function_coverage=1
00:23:15.315  		--rc genhtml_legend=1
00:23:15.315  		--rc geninfo_all_blocks=1
00:23:15.315  		--rc geninfo_unexecuted_blocks=1
00:23:15.315  		
00:23:15.315  		'
00:23:15.315   06:33:32	-- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:23:15.315     06:33:32	-- nvmf/common.sh@7 -- # uname -s
00:23:15.315    06:33:32	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:15.316    06:33:32	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:15.316    06:33:32	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:15.316    06:33:32	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:15.316    06:33:32	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:15.316    06:33:32	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:15.316    06:33:32	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:15.316    06:33:32	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:15.316    06:33:32	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:15.316     06:33:32	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:15.316    06:33:32	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:23:15.316    06:33:32	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:23:15.316    06:33:32	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:15.316    06:33:32	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:15.316    06:33:32	-- nvmf/common.sh@21 -- # NET_TYPE=virt
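(NVME_CONNECT, NVME_HOST, NVMF_PORT and the generated host NQN/ID above are the pieces an initiator-side attach would be assembled from. This particular test drives discovery through bdev_nvme RPCs instead of nvme-cli, but a hypothetical connect built from these variables would look roughly like:
    # illustrative only; cnode0 and the 10.0.0.2:4420 listener are created later in this test
    $NVME_CONNECT -t tcp -a 10.0.0.2 -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"
)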
00:23:15.316    06:33:32	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:15.316     06:33:32	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:15.316     06:33:32	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:15.316     06:33:32	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:15.316      06:33:32	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:15.316      06:33:32	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:15.316      06:33:32	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:15.316      06:33:32	-- paths/export.sh@5 -- # export PATH
00:23:15.316      06:33:32	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:15.316    06:33:32	-- nvmf/common.sh@46 -- # : 0
00:23:15.316    06:33:32	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:23:15.316    06:33:32	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:23:15.316    06:33:32	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:23:15.316    06:33:32	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:15.316    06:33:32	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:15.316    06:33:32	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:23:15.316    06:33:32	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:23:15.316    06:33:32	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:23:15.316   06:33:32	-- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address
00:23:15.316   06:33:32	-- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009
00:23:15.316   06:33:32	-- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:23:15.316   06:33:32	-- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode
00:23:15.316   06:33:32	-- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2
00:23:15.316   06:33:32	-- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:23:15.316   06:33:32	-- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock
00:23:15.316   06:33:32	-- host/mdns_discovery.sh@23 -- # nvmftestinit
00:23:15.316   06:33:32	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:23:15.316   06:33:32	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:15.316   06:33:32	-- nvmf/common.sh@436 -- # prepare_net_devs
00:23:15.316   06:33:32	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:23:15.316   06:33:32	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:23:15.316   06:33:32	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:15.316   06:33:32	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:15.316    06:33:32	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:15.316   06:33:32	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:23:15.316   06:33:32	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:23:15.316   06:33:32	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:23:15.316   06:33:32	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:23:15.316   06:33:32	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:23:15.316   06:33:32	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:23:15.316   06:33:32	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:15.316   06:33:32	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:15.316   06:33:32	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:23:15.316   06:33:32	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:23:15.316   06:33:32	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:23:15.316   06:33:32	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:23:15.316   06:33:32	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:23:15.316   06:33:32	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:15.316   06:33:32	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:23:15.316   06:33:32	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:23:15.316   06:33:32	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:23:15.316   06:33:32	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:23:15.316   06:33:32	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:23:15.316   06:33:32	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:23:15.316  Cannot find device "nvmf_tgt_br"
00:23:15.316   06:33:32	-- nvmf/common.sh@154 -- # true
00:23:15.316   06:33:32	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:23:15.316  Cannot find device "nvmf_tgt_br2"
00:23:15.316   06:33:32	-- nvmf/common.sh@155 -- # true
00:23:15.316   06:33:32	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:23:15.316   06:33:32	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:23:15.316  Cannot find device "nvmf_tgt_br"
00:23:15.316   06:33:32	-- nvmf/common.sh@157 -- # true
00:23:15.316   06:33:32	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:23:15.316  Cannot find device "nvmf_tgt_br2"
00:23:15.316   06:33:32	-- nvmf/common.sh@158 -- # true
00:23:15.316   06:33:32	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:23:15.316   06:33:32	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:23:15.316   06:33:32	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:15.316  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:15.316   06:33:32	-- nvmf/common.sh@161 -- # true
00:23:15.316   06:33:32	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:15.316  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:15.316   06:33:32	-- nvmf/common.sh@162 -- # true
00:23:15.316   06:33:32	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:23:15.316   06:33:32	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:23:15.316   06:33:32	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:23:15.316   06:33:32	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:23:15.316   06:33:32	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:23:15.316   06:33:32	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:23:15.316   06:33:32	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:23:15.601   06:33:32	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:23:15.601   06:33:32	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:23:15.601   06:33:32	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:23:15.601   06:33:32	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:23:15.601   06:33:32	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:23:15.602   06:33:32	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:23:15.602   06:33:32	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:23:15.602   06:33:32	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:23:15.602   06:33:32	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:23:15.602   06:33:32	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:23:15.602   06:33:32	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:23:15.602   06:33:32	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:23:15.602   06:33:32	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:23:15.602   06:33:32	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:23:15.602   06:33:32	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:23:15.602   06:33:32	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:23:15.602   06:33:32	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:23:15.602  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:15.602  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms
00:23:15.602  
00:23:15.602  --- 10.0.0.2 ping statistics ---
00:23:15.602  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:15.602  rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
00:23:15.602   06:33:32	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:23:15.602  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:23:15.602  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms
00:23:15.602  
00:23:15.602  --- 10.0.0.3 ping statistics ---
00:23:15.602  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:15.602  rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:23:15.602   06:33:32	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:23:15.602  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:15.602  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:23:15.602  
00:23:15.602  --- 10.0.0.1 ping statistics ---
00:23:15.602  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:15.602  rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:23:15.602   06:33:32	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:15.602   06:33:32	-- nvmf/common.sh@421 -- # return 0
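(Condensed, the nvmf_veth_init sequence traced above builds this topology: an initiator veth nvmf_init_if (10.0.0.1/24) in the root namespace, target veths nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, with their peer ends joined by the nvmf_br bridge, and the three pings confirming reachability in both directions. A standalone sketch of the same setup for one target interface (needs root; the second interface follows the same pattern):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
)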
00:23:15.602   06:33:32	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:23:15.602   06:33:32	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:15.602   06:33:32	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:23:15.602   06:33:32	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:23:15.602   06:33:32	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:15.602   06:33:32	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:23:15.602   06:33:32	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:23:15.602   06:33:32	-- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:23:15.602   06:33:32	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:23:15.602   06:33:32	-- common/autotest_common.sh@722 -- # xtrace_disable
00:23:15.602   06:33:32	-- common/autotest_common.sh@10 -- # set +x
00:23:15.602   06:33:32	-- nvmf/common.sh@469 -- # nvmfpid=87561
00:23:15.602   06:33:32	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:23:15.602   06:33:32	-- nvmf/common.sh@470 -- # waitforlisten 87561
00:23:15.602   06:33:32	-- common/autotest_common.sh@829 -- # '[' -z 87561 ']'
00:23:15.602   06:33:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:15.602   06:33:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:15.602  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:15.602   06:33:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:15.602   06:33:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:15.602   06:33:32	-- common/autotest_common.sh@10 -- # set +x
00:23:15.602  [2024-12-16 06:33:32.519993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:15.602  [2024-12-16 06:33:32.520700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:15.869  [2024-12-16 06:33:32.660610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:15.869  [2024-12-16 06:33:32.773815] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:23:15.869  [2024-12-16 06:33:32.774017] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:15.869  [2024-12-16 06:33:32.774038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:15.869  [2024-12-16 06:33:32.774051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:15.869  [2024-12-16 06:33:32.774098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:16.804   06:33:33	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:16.804   06:33:33	-- common/autotest_common.sh@862 -- # return 0
00:23:16.804   06:33:33	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:23:16.804   06:33:33	-- common/autotest_common.sh@728 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804   06:33:33	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
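(nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks until the app answers on /var/tmp/spdk.sock. A simplified stand-in for that wait, for illustration only; the real waitforlisten helper in autotest_common.sh also verifies the PID stays alive and retries RPC:
    # hypothetical wait-for-RPC-socket loop
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
)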
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804  [2024-12-16 06:33:33.691948] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804  [2024-12-16 06:33:33.700114] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804  null0
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804  null1
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804  null2
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804  null3
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine
00:23:16.804   06:33:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:16.804   06:33:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
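(The four null bdevs created above, null0 through null3 at 1000 MB with 512-byte blocks, become the namespaces attached to the two subsystems later in the test. Assuming rpc_cmd is, as usual, a thin wrapper over scripts/rpc.py, the same setup done by hand would be roughly:
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py bdev_null_create null1 1000 512
    ./scripts/rpc.py bdev_null_create null2 1000 512
    ./scripts/rpc.py bdev_null_create null3 1000 512
    ./scripts/rpc.py bdev_wait_for_examine
)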
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@47 -- # hostpid=87612
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:23:16.804   06:33:33	-- host/mdns_discovery.sh@48 -- # waitforlisten 87612 /tmp/host.sock
00:23:16.804   06:33:33	-- common/autotest_common.sh@829 -- # '[' -z 87612 ']'
00:23:16.804   06:33:33	-- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
00:23:16.804   06:33:33	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:16.804   06:33:33	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:23:16.804  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:23:16.804   06:33:33	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:16.804   06:33:33	-- common/autotest_common.sh@10 -- # set +x
00:23:17.063  [2024-12-16 06:33:33.795596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:17.063  [2024-12-16 06:33:33.795659] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87612 ]
00:23:17.063  [2024-12-16 06:33:33.931097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:17.063  [2024-12-16 06:33:34.023305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:23:17.063  [2024-12-16 06:33:34.023501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:17.998   06:33:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:17.998   06:33:34	-- common/autotest_common.sh@862 -- # return 0
00:23:17.998   06:33:34	-- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM
00:23:17.998   06:33:34	-- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT
00:23:17.998   06:33:34	-- host/mdns_discovery.sh@55 -- # avahi-daemon --kill
00:23:17.998   06:33:34	-- host/mdns_discovery.sh@57 -- # avahipid=87647
00:23:17.998   06:33:34	-- host/mdns_discovery.sh@58 -- # sleep 1
00:23:17.998   06:33:34	-- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63
00:23:17.998    06:33:34	-- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no'
00:23:17.998  Process 1066 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
00:23:17.998  Found user 'avahi' (UID 70) and group 'avahi' (GID 70).
00:23:17.998  Successfully dropped root privileges.
00:23:17.998  avahi-daemon 0.8 starting up.
00:23:17.998  WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
00:23:17.998  Successfully called chroot().
00:23:17.998  Successfully dropped remaining capabilities.
00:23:17.998  No service file found in /etc/avahi/services.
00:23:18.932  Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3.
00:23:18.932  New relevant interface nvmf_tgt_if2.IPv4 for mDNS.
00:23:18.932  Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2.
00:23:18.932  New relevant interface nvmf_tgt_if.IPv4 for mDNS.
00:23:18.932  Network interface enumeration completed.
00:23:18.932  Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*.
00:23:18.932  Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4.
00:23:18.932  Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*.
00:23:18.932  Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4.
00:23:18.932  Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3011769600.
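(The avahi-daemon instance above runs inside the target namespace with a config fed through process substitution, the echo -e on the mdns_discovery.sh@56 line. Rendered, that config is simply:
    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no
so mDNS is restricted to IPv4 on the two target-side interfaces, which matches the two "New relevant interface" lines logged above.)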
00:23:19.190   06:33:35	-- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:23:19.190   06:33:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.190   06:33:35	-- common/autotest_common.sh@10 -- # set +x
00:23:19.190   06:33:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.190   06:33:35	-- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:19.190   06:33:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.190   06:33:35	-- common/autotest_common.sh@10 -- # set +x
00:23:19.190   06:33:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
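(The host application, the second nvmf_tgt listening on /tmp/host.sock, is now told to browse mDNS for _nvme-disc._tcp services and attach to any discovery controller it resolves, using HOST_NQN as its identity. Assuming rpc_cmd forwards to scripts/rpc.py, the equivalent direct call is roughly:
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
)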
00:23:19.190   06:33:35	-- host/mdns_discovery.sh@85 -- # notify_id=0
00:23:19.190    06:33:35	-- host/mdns_discovery.sh@91 -- # get_subsystem_names
00:23:19.190    06:33:35	-- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:19.190    06:33:35	-- host/mdns_discovery.sh@68 -- # jq -r '.[].name'
00:23:19.190    06:33:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.190    06:33:35	-- host/mdns_discovery.sh@68 -- # sort
00:23:19.190    06:33:35	-- common/autotest_common.sh@10 -- # set +x
00:23:19.190    06:33:35	-- host/mdns_discovery.sh@68 -- # xargs
00:23:19.190    06:33:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.190   06:33:36	-- host/mdns_discovery.sh@91 -- # [[ '' == '' ]]
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@92 -- # get_bdev_list
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:19.191    06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:19.191    06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@64 -- # sort
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@64 -- # xargs
00:23:19.191    06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.191   06:33:36	-- host/mdns_discovery.sh@92 -- # [[ '' == '' ]]
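(The get_subsystem_names and get_bdev_list checks above are thin helpers that list controller/bdev names over the host socket and normalize them into one sorted line for comparison; reconstructed from the trace:
    get_bdev_list() {
        # rpc_cmd here is the harness RPC wrapper seen throughout this log
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
At this point nothing has been discovered yet, hence the empty-string comparisons.)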
00:23:19.191   06:33:36	-- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:23:19.191   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.191   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.191   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@95 -- # get_subsystem_names
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@68 -- # jq -r '.[].name'
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@68 -- # sort
00:23:19.191    06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@68 -- # xargs
00:23:19.191    06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.191    06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.191   06:33:36	-- host/mdns_discovery.sh@95 -- # [[ '' == '' ]]
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@96 -- # get_bdev_list
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:19.191    06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.191    06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@64 -- # sort
00:23:19.191    06:33:36	-- host/mdns_discovery.sh@64 -- # xargs
00:23:19.191    06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.449   06:33:36	-- host/mdns_discovery.sh@96 -- # [[ '' == '' ]]
00:23:19.449   06:33:36	-- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:23:19.449   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.449   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.449   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@99 -- # get_subsystem_names
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@68 -- # jq -r '.[].name'
00:23:19.449    06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.449    06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@68 -- # xargs
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@68 -- # sort
00:23:19.449    06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.449  [2024-12-16 06:33:36.240304] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:23:19.449   06:33:36	-- host/mdns_discovery.sh@99 -- # [[ '' == '' ]]
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@100 -- # get_bdev_list
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:19.449    06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.449    06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@64 -- # sort
00:23:19.449    06:33:36	-- host/mdns_discovery.sh@64 -- # xargs
00:23:19.449    06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.449   06:33:36	-- host/mdns_discovery.sh@100 -- # [[ '' == '' ]]
00:23:19.449   06:33:36	-- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:19.449   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.449   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.449  [2024-12-16 06:33:36.302149] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:19.449   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.449   06:33:36	-- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:23:19.449   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.449   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.449   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.449   06:33:36	-- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
00:23:19.449   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.450   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.450   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.450   06:33:36	-- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
00:23:19.450   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.450   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.450   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.450   06:33:36	-- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
00:23:19.450   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.450   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.450   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.450   06:33:36	-- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:23:19.450   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.450   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.450  [2024-12-16 06:33:36.342040] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:23:19.450   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.450   06:33:36	-- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
00:23:19.450   06:33:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.450   06:33:36	-- common/autotest_common.sh@10 -- # set +x
00:23:19.450  [2024-12-16 06:33:36.350057] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:19.450   06:33:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.450   06:33:36	-- host/mdns_discovery.sh@124 -- # avahi_clientpid=87699
00:23:19.450   06:33:36	-- host/mdns_discovery.sh@125 -- # sleep 5
00:23:19.450   06:33:36	-- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp
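(avahi-publish above registers a DNS-SD service of type _nvme-disc._tcp named "CDC" on port 8009, with TXT records p=tcp and the discovery NQN, which is exactly what the host app's mDNS browser is waiting for. For illustration, the record could be inspected from another shell in the same namespace with something like:
    ip netns exec nvmf_tgt_ns_spdk avahi-browse -t -r _nvme-disc._tcp
)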
00:23:20.385  [2024-12-16 06:33:37.140305] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:23:20.385  Established under name 'CDC'
00:23:20.643  [2024-12-16 06:33:37.540318] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local'
00:23:20.643  [2024-12-16 06:33:37.540488] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:23:20.643  	TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery"
00:23:20.643  	cookie is 0
00:23:20.643  	is_local: 1
00:23:20.643  	our_own: 0
00:23:20.643  	wide_area: 0
00:23:20.643  	multicast: 1
00:23:20.643  	cached: 1
00:23:20.902  [2024-12-16 06:33:37.640310] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local'
00:23:20.902  [2024-12-16 06:33:37.640464] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2)
00:23:20.902  	TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery"
00:23:20.902  	cookie is 0
00:23:20.902  	is_local: 1
00:23:20.902  	our_own: 0
00:23:20.902  	wide_area: 0
00:23:20.902  	multicast: 1
00:23:20.902  	cached: 1
00:23:21.837  [2024-12-16 06:33:38.546334] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:23:21.837  [2024-12-16 06:33:38.546522] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:23:21.837  [2024-12-16 06:33:38.546641] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:23:21.837  [2024-12-16 06:33:38.632431] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0
00:23:21.837  [2024-12-16 06:33:38.645938] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:21.837  [2024-12-16 06:33:38.646071] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:21.837  [2024-12-16 06:33:38.646104] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:21.837  [2024-12-16 06:33:38.691449] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done
00:23:21.837  [2024-12-16 06:33:38.691624] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again
00:23:21.837  [2024-12-16 06:33:38.733798] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0
00:23:21.837  [2024-12-16 06:33:38.795508] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done
00:23:21.837  [2024-12-16 06:33:38.795680] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@80 -- # jq -r '.[].name'
00:23:25.121    06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@80 -- # sort
00:23:25.121    06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@80 -- # xargs
00:23:25.121    06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]]
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@76 -- # jq -r '.[].name'
00:23:25.121    06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121    06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@76 -- # sort
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@76 -- # xargs
00:23:25.121    06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@129 -- # get_subsystem_names
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@68 -- # jq -r '.[].name'
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@68 -- # sort
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@68 -- # xargs
00:23:25.121    06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121    06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121    06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@130 -- # get_bdev_list
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:25.121    06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@64 -- # sort
00:23:25.121    06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@64 -- # xargs
00:23:25.121    06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]]
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:25.121    06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121    06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@72 -- # sort -n
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@72 -- # xargs
00:23:25.121    06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]]
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@72 -- # sort -n
00:23:25.121    06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@72 -- # xargs
00:23:25.121    06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121    06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@133 -- # get_notification_count
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:25.121    06:33:41	-- host/mdns_discovery.sh@87 -- # jq '. | length'
00:23:25.121    06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121    06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121    06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@87 -- # notification_count=2
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@88 -- # notify_id=2
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]]
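(get_notification_count, as traced above, asks the host app how many bdev notifications arrived since the last notify_id and advances that cursor; reconstructed from the trace:
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
The count of 2 here corresponds to the mdns0_nvme0n1 and mdns1_nvme0n1 bdevs created when the two discovered subsystems were attached.)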
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:23:25.121   06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121   06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121   06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3
00:23:25.121   06:33:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.121   06:33:41	-- common/autotest_common.sh@10 -- # set +x
00:23:25.121   06:33:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.121   06:33:41	-- host/mdns_discovery.sh@139 -- # sleep 1
00:23:26.057    06:33:42	-- host/mdns_discovery.sh@141 -- # get_bdev_list
00:23:26.057    06:33:42	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:26.057    06:33:42	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:26.057    06:33:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.057    06:33:42	-- common/autotest_common.sh@10 -- # set +x
00:23:26.057    06:33:42	-- host/mdns_discovery.sh@64 -- # sort
00:23:26.057    06:33:42	-- host/mdns_discovery.sh@64 -- # xargs
00:23:26.057    06:33:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.057   06:33:42	-- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:23:26.057   06:33:42	-- host/mdns_discovery.sh@142 -- # get_notification_count
00:23:26.057    06:33:42	-- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:26.057    06:33:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.057    06:33:42	-- common/autotest_common.sh@10 -- # set +x
00:23:26.057    06:33:42	-- host/mdns_discovery.sh@87 -- # jq '. | length'
00:23:26.057    06:33:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.057   06:33:42	-- host/mdns_discovery.sh@87 -- # notification_count=2
00:23:26.057   06:33:42	-- host/mdns_discovery.sh@88 -- # notify_id=4
00:23:26.057   06:33:42	-- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]]
00:23:26.057   06:33:42	-- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:23:26.057   06:33:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.057   06:33:42	-- common/autotest_common.sh@10 -- # set +x
00:23:26.057  [2024-12-16 06:33:42.884846] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:26.057  [2024-12-16 06:33:42.885391] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:23:26.057  [2024-12-16 06:33:42.885420] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:26.057  [2024-12-16 06:33:42.885453] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:23:26.057  [2024-12-16 06:33:42.885465] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:23:26.057   06:33:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.057   06:33:42	-- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
00:23:26.057   06:33:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.057   06:33:42	-- common/autotest_common.sh@10 -- # set +x
00:23:26.057  [2024-12-16 06:33:42.892734] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:23:26.057  [2024-12-16 06:33:42.893400] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:23:26.057  [2024-12-16 06:33:42.893450] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:23:26.057   06:33:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.057   06:33:42	-- host/mdns_discovery.sh@149 -- # sleep 1
00:23:26.057  [2024-12-16 06:33:43.025476] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0
00:23:26.057  [2024-12-16 06:33:43.027484] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0
00:23:26.316  [2024-12-16 06:33:43.088856] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done
00:23:26.316  [2024-12-16 06:33:43.088876] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again
00:23:26.316  [2024-12-16 06:33:43.088882] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again
00:23:26.316  [2024-12-16 06:33:43.088897] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:23:26.316  [2024-12-16 06:33:43.088938] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done
00:23:26.316  [2024-12-16 06:33:43.088947] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:26.316  [2024-12-16 06:33:43.088951] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:26.316  [2024-12-16 06:33:43.088962] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:26.316  [2024-12-16 06:33:43.134638] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:26.316  [2024-12-16 06:33:43.134655] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:26.316  [2024-12-16 06:33:43.134692] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again
00:23:26.316  [2024-12-16 06:33:43.134699] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@151 -- # get_subsystem_names
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@68 -- # jq -r '.[].name'
00:23:27.252    06:33:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.252    06:33:43	-- common/autotest_common.sh@10 -- # set +x
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@68 -- # sort
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@68 -- # xargs
00:23:27.252    06:33:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.252   06:33:43	-- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@152 -- # get_bdev_list
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@64 -- # sort
00:23:27.252    06:33:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.252    06:33:43	-- common/autotest_common.sh@10 -- # set +x
00:23:27.252    06:33:43	-- host/mdns_discovery.sh@64 -- # xargs
00:23:27.252    06:33:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:23:27.252    06:33:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.252    06:33:44	-- common/autotest_common.sh@10 -- # set +x
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@72 -- # sort -n
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@72 -- # xargs
00:23:27.252    06:33:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:27.252    06:33:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.252    06:33:44	-- common/autotest_common.sh@10 -- # set +x
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@72 -- # xargs
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@72 -- # sort -n
00:23:27.252    06:33:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@155 -- # get_notification_count
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:23:27.252    06:33:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.252    06:33:44	-- host/mdns_discovery.sh@87 -- # jq '. | length'
00:23:27.252    06:33:44	-- common/autotest_common.sh@10 -- # set +x
00:23:27.252    06:33:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@87 -- # notification_count=0
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@88 -- # notify_id=4
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]]
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:27.252   06:33:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.252   06:33:44	-- common/autotest_common.sh@10 -- # set +x
00:23:27.252  [2024-12-16 06:33:44.202255] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:23:27.252  [2024-12-16 06:33:44.202280] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:27.252  [2024-12-16 06:33:44.202306] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:23:27.252  [2024-12-16 06:33:44.202317] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:23:27.252  [2024-12-16 06:33:44.205417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:27.252  [2024-12-16 06:33:44.205451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:27.252  [2024-12-16 06:33:44.205462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:27.252  [2024-12-16 06:33:44.205470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:27.252  [2024-12-16 06:33:44.205479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:27.252  [2024-12-16 06:33:44.205509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:27.252  [2024-12-16 06:33:44.205519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:27.252  [2024-12-16 06:33:44.205529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:27.252  [2024-12-16 06:33:44.205537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.252   06:33:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.252   06:33:44	-- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
00:23:27.252   06:33:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.252   06:33:44	-- common/autotest_common.sh@10 -- # set +x
00:23:27.252  [2024-12-16 06:33:44.209922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:27.253  [2024-12-16 06:33:44.210049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:27.253  [2024-12-16 06:33:44.210063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:27.253  [2024-12-16 06:33:44.210072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:27.253  [2024-12-16 06:33:44.210080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:27.253  [2024-12-16 06:33:44.210087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:27.253  [2024-12-16 06:33:44.210095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:27.253  [2024-12-16 06:33:44.210102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:27.253  [2024-12-16 06:33:44.210110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.253  [2024-12-16 06:33:44.210280] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:23:27.253  [2024-12-16 06:33:44.210324] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:23:27.253   06:33:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.253   06:33:44	-- host/mdns_discovery.sh@162 -- # sleep 1
00:23:27.253  [2024-12-16 06:33:44.215395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.253  [2024-12-16 06:33:44.219892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.253  [2024-12-16 06:33:44.225403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.253  [2024-12-16 06:33:44.225516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.253  [2024-12-16 06:33:44.225561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.253  [2024-12-16 06:33:44.225576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.253  [2024-12-16 06:33:44.225585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.253  [2024-12-16 06:33:44.225600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.253  [2024-12-16 06:33:44.225614] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.253  [2024-12-16 06:33:44.225622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.253  [2024-12-16 06:33:44.225630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.253  [2024-12-16 06:33:44.225644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.513  [2024-12-16 06:33:44.229902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.513  [2024-12-16 06:33:44.229978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.513  [2024-12-16 06:33:44.230018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.513  [2024-12-16 06:33:44.230032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.513  [2024-12-16 06:33:44.230041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.513  [2024-12-16 06:33:44.230055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.513  [2024-12-16 06:33:44.230066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.513  [2024-12-16 06:33:44.230074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.513  [2024-12-16 06:33:44.230081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.513  [2024-12-16 06:33:44.230109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.513  [2024-12-16 06:33:44.235450] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.513  [2024-12-16 06:33:44.235538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.513  [2024-12-16 06:33:44.235577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.513  [2024-12-16 06:33:44.235591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.513  [2024-12-16 06:33:44.235600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.513  [2024-12-16 06:33:44.235614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.513  [2024-12-16 06:33:44.235625] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.513  [2024-12-16 06:33:44.235632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.513  [2024-12-16 06:33:44.235640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.513  [2024-12-16 06:33:44.235652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.513  [2024-12-16 06:33:44.239948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.513  [2024-12-16 06:33:44.240014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.513  [2024-12-16 06:33:44.240052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.513  [2024-12-16 06:33:44.240065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.513  [2024-12-16 06:33:44.240074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.513  [2024-12-16 06:33:44.240088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.513  [2024-12-16 06:33:44.240099] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.513  [2024-12-16 06:33:44.240107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.513  [2024-12-16 06:33:44.240115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.513  [2024-12-16 06:33:44.240126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.513  [2024-12-16 06:33:44.245500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.513  [2024-12-16 06:33:44.245577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.513  [2024-12-16 06:33:44.245614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.513  [2024-12-16 06:33:44.245628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.514  [2024-12-16 06:33:44.245637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.245650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.245662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.245670] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.245677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.514  [2024-12-16 06:33:44.245689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.249989] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.514  [2024-12-16 06:33:44.250059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.250097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.250111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.514  [2024-12-16 06:33:44.250120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.250135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.250147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.250154] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.250162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.514  [2024-12-16 06:33:44.250173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.255546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.514  [2024-12-16 06:33:44.255619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.255657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.255671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.514  [2024-12-16 06:33:44.255680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.255702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.255714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.255722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.255729] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.514  [2024-12-16 06:33:44.255741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.260031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.514  [2024-12-16 06:33:44.260105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.260144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.260157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.514  [2024-12-16 06:33:44.260167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.260180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.260192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.260199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.260206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.514  [2024-12-16 06:33:44.260218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.265591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.514  [2024-12-16 06:33:44.265659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.265696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.265709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.514  [2024-12-16 06:33:44.265718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.265731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.265743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.265750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.265758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.514  [2024-12-16 06:33:44.265769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.270076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.514  [2024-12-16 06:33:44.270149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.270187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.270201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.514  [2024-12-16 06:33:44.270210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.270224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.270235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.270244] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.270251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.514  [2024-12-16 06:33:44.270263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.275633] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.514  [2024-12-16 06:33:44.275701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.275737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.275750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.514  [2024-12-16 06:33:44.275759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.275772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.275784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.275791] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.275799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.514  [2024-12-16 06:33:44.275810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.280119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.514  [2024-12-16 06:33:44.280185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.280222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.280236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.514  [2024-12-16 06:33:44.280245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.280258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.280269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.280276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.280284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.514  [2024-12-16 06:33:44.280296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.285678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.514  [2024-12-16 06:33:44.285743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.285779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.285792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.514  [2024-12-16 06:33:44.285802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.285815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.285827] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.285834] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.285841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.514  [2024-12-16 06:33:44.285853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.514  [2024-12-16 06:33:44.290161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.514  [2024-12-16 06:33:44.290227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.290265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.514  [2024-12-16 06:33:44.290279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.514  [2024-12-16 06:33:44.290288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.514  [2024-12-16 06:33:44.290301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.514  [2024-12-16 06:33:44.290313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.514  [2024-12-16 06:33:44.290320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.514  [2024-12-16 06:33:44.290327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.514  [2024-12-16 06:33:44.290353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.295720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.515  [2024-12-16 06:33:44.295789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.295826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.295839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.515  [2024-12-16 06:33:44.295848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.295861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.295873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.295880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.295887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.515  [2024-12-16 06:33:44.295899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.300205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.515  [2024-12-16 06:33:44.300281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.300320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.300334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.515  [2024-12-16 06:33:44.300343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.300357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.300408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.300420] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.300428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.515  [2024-12-16 06:33:44.300440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.305766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.515  [2024-12-16 06:33:44.305840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.305878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.305891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.515  [2024-12-16 06:33:44.305901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.305914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.305926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.305933] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.305941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.515  [2024-12-16 06:33:44.305952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.310250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.515  [2024-12-16 06:33:44.310317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.310354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.310367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.515  [2024-12-16 06:33:44.310376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.310389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.310414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.310423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.310430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.515  [2024-12-16 06:33:44.310442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.315810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.515  [2024-12-16 06:33:44.315891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.315929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.315943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.515  [2024-12-16 06:33:44.315952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.315965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.315976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.315983] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.315991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.515  [2024-12-16 06:33:44.316002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.320293] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.515  [2024-12-16 06:33:44.320360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.320398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.320412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.515  [2024-12-16 06:33:44.320421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.320434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.320460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.320469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.320476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.515  [2024-12-16 06:33:44.320499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.325864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.515  [2024-12-16 06:33:44.325939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.325975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.325989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.515  [2024-12-16 06:33:44.325998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.326011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.326023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.326030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.326038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.515  [2024-12-16 06:33:44.326050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.330336] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.515  [2024-12-16 06:33:44.330409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.330446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.330460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.515  [2024-12-16 06:33:44.330469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.330529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.330560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.330570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.330578] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.515  [2024-12-16 06:33:44.330591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.335908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:23:27.515  [2024-12-16 06:33:44.335974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.336011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.336025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7b70 with addr=10.0.0.2, port=4420
00:23:27.515  [2024-12-16 06:33:44.336039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb7b70 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.336052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb7b70 (9): Bad file descriptor
00:23:27.515  [2024-12-16 06:33:44.336064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:23:27.515  [2024-12-16 06:33:44.336071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:23:27.515  [2024-12-16 06:33:44.336079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:27.515  [2024-12-16 06:33:44.336090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.515  [2024-12-16 06:33:44.340377] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller
00:23:27.515  [2024-12-16 06:33:44.340443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.340481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:27.515  [2024-12-16 06:33:44.340507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b53410 with addr=10.0.0.3, port=4420
00:23:27.515  [2024-12-16 06:33:44.340518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53410 is same with the state(5) to be set
00:23:27.515  [2024-12-16 06:33:44.340532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53410 (9): Bad file descriptor
00:23:27.516  [2024-12-16 06:33:44.340564] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:23:27.516  [2024-12-16 06:33:44.340581] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:27.516  [2024-12-16 06:33:44.340597] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:27.516  [2024-12-16 06:33:44.340623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state
00:23:27.516  [2024-12-16 06:33:44.340634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed
00:23:27.516  [2024-12-16 06:33:44.340642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state.
00:23:27.516  [2024-12-16 06:33:44.340659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:27.516  [2024-12-16 06:33:44.342569] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found
00:23:27.516  [2024-12-16 06:33:44.342595] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again
00:23:27.516  [2024-12-16 06:33:44.342612] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:23:27.516  [2024-12-16 06:33:44.426636] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:27.516  [2024-12-16 06:33:44.428639] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@164 -- # get_subsystem_names
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@68 -- # jq -r '.[].name'
00:23:28.451    06:33:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.451    06:33:45	-- common/autotest_common.sh@10 -- # set +x
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@68 -- # sort
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@68 -- # xargs
00:23:28.451    06:33:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.451   06:33:45	-- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@165 -- # get_bdev_list
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@64 -- # sort
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@64 -- # xargs
00:23:28.451    06:33:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.451    06:33:45	-- common/autotest_common.sh@10 -- # set +x
00:23:28.451    06:33:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.451   06:33:45	-- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:23:28.451    06:33:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.451    06:33:45	-- common/autotest_common.sh@10 -- # set +x
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@72 -- # sort -n
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@72 -- # xargs
00:23:28.451    06:33:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.451   06:33:45	-- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]]
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:23:28.451    06:33:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.451    06:33:45	-- common/autotest_common.sh@10 -- # set +x
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@72 -- # sort -n
00:23:28.451    06:33:45	-- host/mdns_discovery.sh@72 -- # xargs
00:23:28.451    06:33:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.709   06:33:45	-- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]]
00:23:28.709   06:33:45	-- host/mdns_discovery.sh@168 -- # get_notification_count
00:23:28.709    06:33:45	-- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:23:28.709    06:33:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.709    06:33:45	-- common/autotest_common.sh@10 -- # set +x
00:23:28.709    06:33:45	-- host/mdns_discovery.sh@87 -- # jq '. | length'
00:23:28.709    06:33:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.709   06:33:45	-- host/mdns_discovery.sh@87 -- # notification_count=0
00:23:28.709   06:33:45	-- host/mdns_discovery.sh@88 -- # notify_id=4
00:23:28.709   06:33:45	-- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]]
00:23:28.709   06:33:45	-- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:23:28.709   06:33:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.709   06:33:45	-- common/autotest_common.sh@10 -- # set +x
00:23:28.709   06:33:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.709   06:33:45	-- host/mdns_discovery.sh@172 -- # sleep 1
00:23:28.709  [2024-12-16 06:33:45.540314] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:23:29.643    06:33:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@80 -- # jq -r '.[].name'
00:23:29.643    06:33:46	-- common/autotest_common.sh@10 -- # set +x
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@80 -- # xargs
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@80 -- # sort
00:23:29.643    06:33:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:29.643   06:33:46	-- host/mdns_discovery.sh@174 -- # [[ '' == '' ]]
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@175 -- # get_subsystem_names
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:29.643    06:33:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:29.643    06:33:46	-- common/autotest_common.sh@10 -- # set +x
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@68 -- # jq -r '.[].name'
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@68 -- # sort
00:23:29.643    06:33:46	-- host/mdns_discovery.sh@68 -- # xargs
00:23:29.643    06:33:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:29.902   06:33:46	-- host/mdns_discovery.sh@175 -- # [[ '' == '' ]]
00:23:29.902    06:33:46	-- host/mdns_discovery.sh@176 -- # get_bdev_list
00:23:29.902    06:33:46	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:29.902    06:33:46	-- host/mdns_discovery.sh@64 -- # sort
00:23:29.902    06:33:46	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:29.902    06:33:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:29.902    06:33:46	-- common/autotest_common.sh@10 -- # set +x
00:23:29.902    06:33:46	-- host/mdns_discovery.sh@64 -- # xargs
00:23:29.902    06:33:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:29.902   06:33:46	-- host/mdns_discovery.sh@176 -- # [[ '' == '' ]]
00:23:29.902   06:33:46	-- host/mdns_discovery.sh@177 -- # get_notification_count
00:23:29.902    06:33:46	-- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:23:29.902    06:33:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:29.902    06:33:46	-- common/autotest_common.sh@10 -- # set +x
00:23:29.902    06:33:46	-- host/mdns_discovery.sh@87 -- # jq '. | length'
00:23:29.902    06:33:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:29.902   06:33:46	-- host/mdns_discovery.sh@87 -- # notification_count=4
00:23:29.902   06:33:46	-- host/mdns_discovery.sh@88 -- # notify_id=8
00:23:29.902   06:33:46	-- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]]
00:23:29.902   06:33:46	-- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:29.902   06:33:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:29.902   06:33:46	-- common/autotest_common.sh@10 -- # set +x
00:23:29.902   06:33:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:29.902   06:33:46	-- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:23:29.902   06:33:46	-- common/autotest_common.sh@650 -- # local es=0
00:23:29.902   06:33:46	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:23:29.902   06:33:46	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:29.902   06:33:46	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:29.902    06:33:46	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:29.903   06:33:46	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:29.903   06:33:46	-- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:23:29.903   06:33:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:29.903   06:33:46	-- common/autotest_common.sh@10 -- # set +x
00:23:29.903  [2024-12-16 06:33:46.748381] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns
00:23:29.903  2024/12/16 06:33:46 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:23:29.903  request:
00:23:29.903  {
00:23:29.903  "method": "bdev_nvme_start_mdns_discovery",
00:23:29.903  "params": {
00:23:29.903  "name": "mdns",
00:23:29.903  "svcname": "_nvme-disc._http",
00:23:29.903  "hostnqn": "nqn.2021-12.io.spdk:test"
00:23:29.903  }
00:23:29.903  }
00:23:29.903  Got JSON-RPC error response
00:23:29.903  GoRPCClient: error on JSON-RPC call
00:23:29.903   06:33:46	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:29.903   06:33:46	-- common/autotest_common.sh@653 -- # es=1
00:23:29.903   06:33:46	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:29.903   06:33:46	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:29.903   06:33:46	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:29.903   06:33:46	-- host/mdns_discovery.sh@183 -- # sleep 5
00:23:30.470  [2024-12-16 06:33:47.137003] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:23:30.470  [2024-12-16 06:33:47.237001] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:23:30.470  [2024-12-16 06:33:47.337005] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local'
00:23:30.470  [2024-12-16 06:33:47.337024] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:23:30.470  	TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery"
00:23:30.470  	cookie is 0
00:23:30.470  	is_local: 1
00:23:30.470  	our_own: 0
00:23:30.470  	wide_area: 0
00:23:30.470  	multicast: 1
00:23:30.470  	cached: 1
00:23:30.470  [2024-12-16 06:33:47.437006] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local'
00:23:30.470  [2024-12-16 06:33:47.437025] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2)
00:23:30.470  	TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery"
00:23:30.470  	cookie is 0
00:23:30.470  	is_local: 1
00:23:30.470  	our_own: 0
00:23:30.470  	wide_area: 0
00:23:30.470  	multicast: 1
00:23:30.470  	cached: 1
00:23:31.405  [2024-12-16 06:33:48.347936] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:23:31.405  [2024-12-16 06:33:48.347956] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:23:31.405  [2024-12-16 06:33:48.347971] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:23:31.663  [2024-12-16 06:33:48.434032] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0
00:23:31.663  [2024-12-16 06:33:48.447794] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:31.663  [2024-12-16 06:33:48.447812] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:31.663  [2024-12-16 06:33:48.447826] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:31.663  [2024-12-16 06:33:48.501326] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done
00:23:31.663  [2024-12-16 06:33:48.501351] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again
00:23:31.663  [2024-12-16 06:33:48.534508] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0
00:23:31.663  [2024-12-16 06:33:48.593059] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done
00:23:31.663  [2024-12-16 06:33:48.593082] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@80 -- # jq -r '.[].name'
00:23:34.948    06:33:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:34.948    06:33:51	-- common/autotest_common.sh@10 -- # set +x
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@80 -- # sort
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@80 -- # xargs
00:23:34.948    06:33:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:34.948   06:33:51	-- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]]
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@76 -- # jq -r '.[].name'
00:23:34.948    06:33:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@76 -- # sort
00:23:34.948    06:33:51	-- common/autotest_common.sh@10 -- # set +x
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@76 -- # xargs
00:23:34.948    06:33:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:34.948   06:33:51	-- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@187 -- # get_bdev_list
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@64 -- # sort
00:23:34.948    06:33:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:34.948    06:33:51	-- common/autotest_common.sh@10 -- # set +x
00:23:34.948    06:33:51	-- host/mdns_discovery.sh@64 -- # xargs
00:23:34.948    06:33:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.207   06:33:51	-- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:23:35.207   06:33:51	-- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:35.207   06:33:51	-- common/autotest_common.sh@650 -- # local es=0
00:23:35.207   06:33:51	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:35.207   06:33:51	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:35.207   06:33:51	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:35.207    06:33:51	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:35.207   06:33:51	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:35.207   06:33:51	-- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:35.207   06:33:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.207   06:33:51	-- common/autotest_common.sh@10 -- # set +x
00:23:35.207  [2024-12-16 06:33:51.935856] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp
00:23:35.207  2024/12/16 06:33:51 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:23:35.207  request:
00:23:35.207  {
00:23:35.207  "method": "bdev_nvme_start_mdns_discovery",
00:23:35.207  "params": {
00:23:35.207  "name": "cdc",
00:23:35.207  "svcname": "_nvme-disc._tcp",
00:23:35.207  "hostnqn": "nqn.2021-12.io.spdk:test"
00:23:35.207  }
00:23:35.207  }
00:23:35.207  Got JSON-RPC error response
00:23:35.207  GoRPCClient: error on JSON-RPC call
00:23:35.207   06:33:51	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:35.207   06:33:51	-- common/autotest_common.sh@653 -- # es=1
00:23:35.207   06:33:51	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:35.207   06:33:51	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:35.207   06:33:51	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@76 -- # jq -r '.[].name'
00:23:35.207    06:33:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@76 -- # sort
00:23:35.207    06:33:51	-- common/autotest_common.sh@10 -- # set +x
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@76 -- # xargs
00:23:35.207    06:33:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.207   06:33:51	-- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@192 -- # get_bdev_list
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@64 -- # xargs
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@64 -- # jq -r '.[].name'
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:35.207    06:33:51	-- host/mdns_discovery.sh@64 -- # sort
00:23:35.207    06:33:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.207    06:33:51	-- common/autotest_common.sh@10 -- # set +x
00:23:35.207    06:33:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.207   06:33:52	-- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:23:35.207   06:33:52	-- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:23:35.207   06:33:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.207   06:33:52	-- common/autotest_common.sh@10 -- # set +x
00:23:35.207   06:33:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.207   06:33:52	-- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT
00:23:35.207   06:33:52	-- host/mdns_discovery.sh@197 -- # kill 87612
00:23:35.207   06:33:52	-- host/mdns_discovery.sh@200 -- # wait 87612
00:23:35.207  [2024-12-16 06:33:52.158539] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:23:35.466   06:33:52	-- host/mdns_discovery.sh@201 -- # kill 87699
00:23:35.466  Got SIGTERM, quitting.
00:23:35.466   06:33:52	-- host/mdns_discovery.sh@202 -- # kill 87647
00:23:35.466  Got SIGTERM, quitting.
00:23:35.466   06:33:52	-- host/mdns_discovery.sh@203 -- # nvmftestfini
00:23:35.466   06:33:52	-- nvmf/common.sh@476 -- # nvmfcleanup
00:23:35.466   06:33:52	-- nvmf/common.sh@116 -- # sync
00:23:35.466  Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3.
00:23:35.466  Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2.
00:23:35.466  avahi-daemon 0.8 exiting.
00:23:35.466   06:33:52	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:35.466   06:33:52	-- nvmf/common.sh@119 -- # set +e
00:23:35.466   06:33:52	-- nvmf/common.sh@120 -- # for i in {1..20}
00:23:35.466   06:33:52	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:23:35.466  rmmod nvme_tcp
00:23:35.466  rmmod nvme_fabrics
00:23:35.466  rmmod nvme_keyring
00:23:35.466   06:33:52	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:23:35.466   06:33:52	-- nvmf/common.sh@123 -- # set -e
00:23:35.466   06:33:52	-- nvmf/common.sh@124 -- # return 0
00:23:35.466   06:33:52	-- nvmf/common.sh@477 -- # '[' -n 87561 ']'
00:23:35.466   06:33:52	-- nvmf/common.sh@478 -- # killprocess 87561
00:23:35.466   06:33:52	-- common/autotest_common.sh@936 -- # '[' -z 87561 ']'
00:23:35.466   06:33:52	-- common/autotest_common.sh@940 -- # kill -0 87561
00:23:35.466    06:33:52	-- common/autotest_common.sh@941 -- # uname
00:23:35.466   06:33:52	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:35.466    06:33:52	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87561
00:23:35.466   06:33:52	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:35.466   06:33:52	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:35.466  killing process with pid 87561
00:23:35.466   06:33:52	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 87561'
00:23:35.466   06:33:52	-- common/autotest_common.sh@955 -- # kill 87561
00:23:35.466   06:33:52	-- common/autotest_common.sh@960 -- # wait 87561
00:23:36.032   06:33:52	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:23:36.033   06:33:52	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:23:36.033   06:33:52	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:23:36.033   06:33:52	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:36.033   06:33:52	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:23:36.033   06:33:52	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:36.033   06:33:52	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:36.033    06:33:52	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:36.033   06:33:52	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:23:36.033  
00:23:36.033  real	0m20.870s
00:23:36.033  user	0m40.590s
00:23:36.033  sys	0m2.046s
00:23:36.033   06:33:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:36.033  ************************************
00:23:36.033   06:33:52	-- common/autotest_common.sh@10 -- # set +x
00:23:36.033  END TEST nvmf_mdns_discovery
00:23:36.033  ************************************
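The teardown logged above stops the mDNS discovery service and the nvmf target before the next test starts. As a rough sketch of that sequence outside the harness (the shell variables for the helper PIDs are hypothetical placeholders; the RPC socket, module names, and overall order are taken from the log):

    # Stop the mDNS-based discovery service on the host app; avahi then
    # leaves its multicast groups and exits once the poller is stopped.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns

    # Kill the per-test helper processes and the nvmf target itself
    # ($host_pid, $helper_pids and $nvmf_tgt_pid stand in for the PIDs
    # printed in the log), then unload the initiator kernel modules.
    kill "$host_pid" "${helper_pids[@]}" 2>/dev/null || true
    kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid" 2>/dev/null || true
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics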
00:23:36.033   06:33:52	-- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]]
00:23:36.033   06:33:52	-- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:23:36.033   06:33:52	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:23:36.033   06:33:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:36.033   06:33:52	-- common/autotest_common.sh@10 -- # set +x
00:23:36.033  ************************************
00:23:36.033  START TEST nvmf_multipath
00:23:36.033  ************************************
00:23:36.033   06:33:52	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:23:36.033  * Looking for test storage...
00:23:36.033  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:23:36.033    06:33:52	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:23:36.033     06:33:52	-- common/autotest_common.sh@1690 -- # lcov --version
00:23:36.033     06:33:52	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:23:36.033    06:33:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:23:36.033    06:33:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:23:36.033    06:33:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:23:36.033    06:33:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:23:36.033    06:33:52	-- scripts/common.sh@335 -- # IFS=.-:
00:23:36.033    06:33:52	-- scripts/common.sh@335 -- # read -ra ver1
00:23:36.033    06:33:52	-- scripts/common.sh@336 -- # IFS=.-:
00:23:36.033    06:33:52	-- scripts/common.sh@336 -- # read -ra ver2
00:23:36.033    06:33:52	-- scripts/common.sh@337 -- # local 'op=<'
00:23:36.033    06:33:52	-- scripts/common.sh@339 -- # ver1_l=2
00:23:36.033    06:33:52	-- scripts/common.sh@340 -- # ver2_l=1
00:23:36.033    06:33:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:23:36.033    06:33:52	-- scripts/common.sh@343 -- # case "$op" in
00:23:36.033    06:33:52	-- scripts/common.sh@344 -- # : 1
00:23:36.033    06:33:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:23:36.033    06:33:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:36.033     06:33:52	-- scripts/common.sh@364 -- # decimal 1
00:23:36.033     06:33:52	-- scripts/common.sh@352 -- # local d=1
00:23:36.033     06:33:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:36.033     06:33:52	-- scripts/common.sh@354 -- # echo 1
00:23:36.033    06:33:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:23:36.033     06:33:52	-- scripts/common.sh@365 -- # decimal 2
00:23:36.033     06:33:53	-- scripts/common.sh@352 -- # local d=2
00:23:36.033     06:33:53	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:36.033     06:33:53	-- scripts/common.sh@354 -- # echo 2
00:23:36.033    06:33:53	-- scripts/common.sh@365 -- # ver2[v]=2
00:23:36.033    06:33:53	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:23:36.033    06:33:53	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:23:36.033    06:33:53	-- scripts/common.sh@367 -- # return 0
00:23:36.033    06:33:53	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:36.033    06:33:53	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:23:36.033  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:36.033  		--rc genhtml_branch_coverage=1
00:23:36.033  		--rc genhtml_function_coverage=1
00:23:36.033  		--rc genhtml_legend=1
00:23:36.033  		--rc geninfo_all_blocks=1
00:23:36.033  		--rc geninfo_unexecuted_blocks=1
00:23:36.033  		
00:23:36.033  		'
00:23:36.033    06:33:53	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:23:36.033  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:36.033  		--rc genhtml_branch_coverage=1
00:23:36.033  		--rc genhtml_function_coverage=1
00:23:36.033  		--rc genhtml_legend=1
00:23:36.033  		--rc geninfo_all_blocks=1
00:23:36.033  		--rc geninfo_unexecuted_blocks=1
00:23:36.033  		
00:23:36.033  		'
00:23:36.033    06:33:53	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:23:36.033  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:36.033  		--rc genhtml_branch_coverage=1
00:23:36.033  		--rc genhtml_function_coverage=1
00:23:36.033  		--rc genhtml_legend=1
00:23:36.033  		--rc geninfo_all_blocks=1
00:23:36.033  		--rc geninfo_unexecuted_blocks=1
00:23:36.033  		
00:23:36.033  		'
00:23:36.033    06:33:53	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:23:36.033  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:36.033  		--rc genhtml_branch_coverage=1
00:23:36.033  		--rc genhtml_function_coverage=1
00:23:36.033  		--rc genhtml_legend=1
00:23:36.033  		--rc geninfo_all_blocks=1
00:23:36.033  		--rc geninfo_unexecuted_blocks=1
00:23:36.033  		
00:23:36.033  		'
00:23:36.033   06:33:53	-- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:23:36.033     06:33:53	-- nvmf/common.sh@7 -- # uname -s
00:23:36.292    06:33:53	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:36.292    06:33:53	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:36.292    06:33:53	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:36.292    06:33:53	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:36.292    06:33:53	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:36.292    06:33:53	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:36.292    06:33:53	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:36.292    06:33:53	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:36.292    06:33:53	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:36.292     06:33:53	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:36.292    06:33:53	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:23:36.292    06:33:53	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:23:36.292    06:33:53	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:36.292    06:33:53	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:36.292    06:33:53	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:23:36.292    06:33:53	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:36.292     06:33:53	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:36.292     06:33:53	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:36.292     06:33:53	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:36.292      06:33:53	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:36.292      06:33:53	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:36.292      06:33:53	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:36.292      06:33:53	-- paths/export.sh@5 -- # export PATH
00:23:36.292      06:33:53	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:36.292    06:33:53	-- nvmf/common.sh@46 -- # : 0
00:23:36.292    06:33:53	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:23:36.292    06:33:53	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:23:36.292    06:33:53	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:23:36.292    06:33:53	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:36.292    06:33:53	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:36.292    06:33:53	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:23:36.292    06:33:53	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:23:36.292    06:33:53	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:23:36.292   06:33:53	-- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:36.292   06:33:53	-- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:36.292   06:33:53	-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:36.292   06:33:53	-- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:23:36.293   06:33:53	-- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:36.293   06:33:53	-- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:23:36.293   06:33:53	-- host/multipath.sh@30 -- # nvmftestinit
00:23:36.293   06:33:53	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:23:36.293   06:33:53	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:36.293   06:33:53	-- nvmf/common.sh@436 -- # prepare_net_devs
00:23:36.293   06:33:53	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:23:36.293   06:33:53	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:23:36.293   06:33:53	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:36.293   06:33:53	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:36.293    06:33:53	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:36.293   06:33:53	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:23:36.293   06:33:53	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:23:36.293   06:33:53	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:23:36.293   06:33:53	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:23:36.293   06:33:53	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:23:36.293   06:33:53	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:23:36.293   06:33:53	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:36.293   06:33:53	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:36.293   06:33:53	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:23:36.293   06:33:53	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:23:36.293   06:33:53	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:23:36.293   06:33:53	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:23:36.293   06:33:53	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:23:36.293   06:33:53	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:36.293   06:33:53	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:23:36.293   06:33:53	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:23:36.293   06:33:53	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:23:36.293   06:33:53	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:23:36.293   06:33:53	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:23:36.293   06:33:53	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:23:36.293  Cannot find device "nvmf_tgt_br"
00:23:36.293   06:33:53	-- nvmf/common.sh@154 -- # true
00:23:36.293   06:33:53	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:23:36.293  Cannot find device "nvmf_tgt_br2"
00:23:36.293   06:33:53	-- nvmf/common.sh@155 -- # true
00:23:36.293   06:33:53	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:23:36.293   06:33:53	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:23:36.293  Cannot find device "nvmf_tgt_br"
00:23:36.293   06:33:53	-- nvmf/common.sh@157 -- # true
00:23:36.293   06:33:53	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:23:36.293  Cannot find device "nvmf_tgt_br2"
00:23:36.293   06:33:53	-- nvmf/common.sh@158 -- # true
00:23:36.293   06:33:53	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:23:36.293   06:33:53	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:23:36.293   06:33:53	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:36.293  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:36.293   06:33:53	-- nvmf/common.sh@161 -- # true
00:23:36.293   06:33:53	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:36.293  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:36.293   06:33:53	-- nvmf/common.sh@162 -- # true
00:23:36.293   06:33:53	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:23:36.293   06:33:53	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:23:36.293   06:33:53	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:23:36.293   06:33:53	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:23:36.293   06:33:53	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:23:36.293   06:33:53	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:23:36.293   06:33:53	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:23:36.293   06:33:53	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:23:36.293   06:33:53	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:23:36.293   06:33:53	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:23:36.293   06:33:53	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:23:36.293   06:33:53	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:23:36.293   06:33:53	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:23:36.293   06:33:53	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:23:36.551   06:33:53	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:23:36.551   06:33:53	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:23:36.551   06:33:53	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:23:36.552   06:33:53	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:23:36.552   06:33:53	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:23:36.552   06:33:53	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:23:36.552   06:33:53	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:23:36.552   06:33:53	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:23:36.552   06:33:53	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:23:36.552   06:33:53	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:23:36.552  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:36.552  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms
00:23:36.552  
00:23:36.552  --- 10.0.0.2 ping statistics ---
00:23:36.552  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:36.552  rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:23:36.552   06:33:53	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:23:36.552  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:23:36.552  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms
00:23:36.552  
00:23:36.552  --- 10.0.0.3 ping statistics ---
00:23:36.552  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:36.552  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:23:36.552   06:33:53	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:23:36.552  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:36.552  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms
00:23:36.552  
00:23:36.552  --- 10.0.0.1 ping statistics ---
00:23:36.552  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:36.552  rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:23:36.552   06:33:53	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:36.552   06:33:53	-- nvmf/common.sh@421 -- # return 0
00:23:36.552   06:33:53	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:23:36.552   06:33:53	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:36.552   06:33:53	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:23:36.552   06:33:53	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:23:36.552   06:33:53	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:36.552   06:33:53	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:23:36.552   06:33:53	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
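nvmf_veth_init above builds the virtual test network: a namespace for the target, veth pairs for the initiator and the two target interfaces, a bridge tying the host-side ends together, an iptables rule admitting NVMe/TCP on port 4420, and three pings to confirm reachability. A condensed sketch of the same topology, with the interface names and addresses copied from the log (this is a reconstruction for readability, not the harness code itself):

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per interface; the host keeps one end, the peer is
    # either the initiator interface or is moved into the target namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target interfaces 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side ends together.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Admit NVMe/TCP traffic on the default port, allow bridge-local
    # forwarding, and verify reachability in both directions.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1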
00:23:36.552   06:33:53	-- host/multipath.sh@32 -- # nvmfappstart -m 0x3
00:23:36.552   06:33:53	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:23:36.552   06:33:53	-- common/autotest_common.sh@722 -- # xtrace_disable
00:23:36.552   06:33:53	-- common/autotest_common.sh@10 -- # set +x
00:23:36.552   06:33:53	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:23:36.552   06:33:53	-- nvmf/common.sh@469 -- # nvmfpid=88222
00:23:36.552   06:33:53	-- nvmf/common.sh@470 -- # waitforlisten 88222
00:23:36.552   06:33:53	-- common/autotest_common.sh@829 -- # '[' -z 88222 ']'
00:23:36.552   06:33:53	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:36.552   06:33:53	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:36.552  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:36.552   06:33:53	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:36.552   06:33:53	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:36.552   06:33:53	-- common/autotest_common.sh@10 -- # set +x
00:23:36.552  [2024-12-16 06:33:53.425684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:36.552  [2024-12-16 06:33:53.426176] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:36.811  [2024-12-16 06:33:53.559223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:23:36.811  [2024-12-16 06:33:53.632544] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:23:36.811  [2024-12-16 06:33:53.632675] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:36.811  [2024-12-16 06:33:53.632686] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:36.811  [2024-12-16 06:33:53.632693] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:36.811  [2024-12-16 06:33:53.632862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:36.811  [2024-12-16 06:33:53.633022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:37.746   06:33:54	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:37.746   06:33:54	-- common/autotest_common.sh@862 -- # return 0
00:23:37.746   06:33:54	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:23:37.746   06:33:54	-- common/autotest_common.sh@728 -- # xtrace_disable
00:23:37.746   06:33:54	-- common/autotest_common.sh@10 -- # set +x
00:23:37.746   06:33:54	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:37.746   06:33:54	-- host/multipath.sh@33 -- # nvmfapp_pid=88222
00:23:37.746   06:33:54	-- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:23:38.004  [2024-12-16 06:33:54.761012] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:38.004   06:33:54	-- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:23:38.261  Malloc0
00:23:38.261   06:33:55	-- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:23:38.261   06:33:55	-- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:38.519   06:33:55	-- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:38.777  [2024-12-16 06:33:55.674016] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:38.777   06:33:55	-- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:39.036  [2024-12-16 06:33:55.874130] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
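At this point the target has a TCP transport, a 64 MiB / 512-byte-block Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that namespace on two listeners (10.0.0.2:4420 and 10.0.0.2:4421), so a single host sees the same namespace over two paths. A sketch of the same RPC sequence, with every command and flag copied from the lines above (the -r and -m 2 flags on nvmf_create_subsystem are used here for ANA reporting and a namespace cap, as this harness configures them):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport and backing device.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # One subsystem with Malloc0 as its namespace and two TCP listeners
    # on the same target address, one per port.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421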
00:23:39.036   06:33:55	-- host/multipath.sh@44 -- # bdevperf_pid=88320
00:23:39.036   06:33:55	-- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:23:39.036   06:33:55	-- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:39.036   06:33:55	-- host/multipath.sh@47 -- # waitforlisten 88320 /var/tmp/bdevperf.sock
00:23:39.036   06:33:55	-- common/autotest_common.sh@829 -- # '[' -z 88320 ']'
00:23:39.036   06:33:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:39.036   06:33:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:39.036  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:39.036   06:33:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:39.036   06:33:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:39.036   06:33:55	-- common/autotest_common.sh@10 -- # set +x
00:23:39.989   06:33:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:39.989   06:33:56	-- common/autotest_common.sh@862 -- # return 0
00:23:39.989   06:33:56	-- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:23:40.261   06:33:57	-- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:23:40.519  Nvme0n1
00:23:40.778   06:33:57	-- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:23:41.036  Nvme0n1
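On the host side, bdevperf (started with -z and its own RPC socket) attaches the same subsystem twice, once per listener; the second attach with -x multipath adds 10.0.0.2:4421 as an additional path under the existing Nvme0n1 bdev instead of creating a new one. A sketch of those three RPCs, with all flags copied verbatim from the commands logged above rather than re-derived:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Global NVMe bdev options as set by the harness (flag copied from the log).
    $rpc -s $sock bdev_nvme_set_options -r -1

    # First path (port 4420) creates Nvme0n1 ...
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    # ... second path (port 4421) joins the same bdev as a multipath path.
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10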
00:23:41.036   06:33:57	-- host/multipath.sh@78 -- # sleep 1
00:23:41.036   06:33:57	-- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:23:41.972   06:33:58	-- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized
00:23:41.972   06:33:58	-- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:42.230   06:33:59	-- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:42.489   06:33:59	-- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421
00:23:42.489   06:33:59	-- host/multipath.sh@65 -- # dtrace_pid=88406
00:23:42.489   06:33:59	-- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88222 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:23:42.489   06:33:59	-- host/multipath.sh@66 -- # sleep 6
00:23:49.052    06:34:05	-- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:23:49.052    06:34:05	-- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:23:49.052   06:34:05	-- host/multipath.sh@67 -- # active_port=4421
00:23:49.052   06:34:05	-- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:49.052  Attaching 4 probes...
00:23:49.052  @path[10.0.0.2, 4421]: 19284
00:23:49.052  @path[10.0.0.2, 4421]: 19867
00:23:49.052  @path[10.0.0.2, 4421]: 19823
00:23:49.052  @path[10.0.0.2, 4421]: 19984
00:23:49.052  @path[10.0.0.2, 4421]: 20162
00:23:49.052    06:34:05	-- host/multipath.sh@69 -- # cut -d ']' -f1
00:23:49.052    06:34:05	-- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:23:49.052    06:34:05	-- host/multipath.sh@69 -- # sed -n 1p
00:23:49.052   06:34:05	-- host/multipath.sh@69 -- # port=4421
00:23:49.052   06:34:05	-- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:23:49.052   06:34:05	-- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:23:49.052   06:34:05	-- host/multipath.sh@72 -- # kill 88406
00:23:49.052   06:34:05	-- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
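confirm_io_on_port, as exercised above, launches bpftrace via scripts/bpftrace.sh against the target PID to count completed I/O per path for six seconds, asks the target which listener currently reports the expected ANA state, and then checks that the trace only saw traffic on that listener's port. The port extraction from the "@path[10.0.0.2, 4421]: N" trace lines is an awk/cut/sed pipeline; a small sketch of that check, reconstructed from the logged commands (pipeline order and file paths as they appear in this run):

    # Which listener does the target report in the expected ANA state?
    active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

    # Which port actually carried I/O according to the bpftrace output?
    # Trace lines look like: @path[10.0.0.2, 4421]: 19284
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' \
               /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt \
           | cut -d ']' -f1 | sed -n 1p)

    # The check passes only when both agree with the expected port (4421 here).
    [[ $port == "$active_port" ]] && [[ $port == 4421 ]]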
00:23:49.052   06:34:05	-- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible
00:23:49.052   06:34:05	-- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:49.052   06:34:05	-- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:23:49.311   06:34:06	-- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420
00:23:49.311   06:34:06	-- host/multipath.sh@65 -- # dtrace_pid=88539
00:23:49.311   06:34:06	-- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88222 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:23:49.311   06:34:06	-- host/multipath.sh@66 -- # sleep 6
00:23:55.966    06:34:12	-- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:23:55.966    06:34:12	-- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:23:55.966   06:34:12	-- host/multipath.sh@67 -- # active_port=4420
00:23:55.966   06:34:12	-- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:55.966  Attaching 4 probes...
00:23:55.966  @path[10.0.0.2, 4420]: 21037
00:23:55.966  @path[10.0.0.2, 4420]: 21463
00:23:55.966  @path[10.0.0.2, 4420]: 21566
00:23:55.966  @path[10.0.0.2, 4420]: 21045
00:23:55.966  @path[10.0.0.2, 4420]: 21305
00:23:55.966    06:34:12	-- host/multipath.sh@69 -- # cut -d ']' -f1
00:23:55.966    06:34:12	-- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:23:55.966    06:34:12	-- host/multipath.sh@69 -- # sed -n 1p
00:23:55.966   06:34:12	-- host/multipath.sh@69 -- # port=4420
00:23:55.966   06:34:12	-- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:23:55.966   06:34:12	-- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:23:55.966   06:34:12	-- host/multipath.sh@72 -- # kill 88539
00:23:55.966   06:34:12	-- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:55.966   06:34:12	-- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized
00:23:55.966   06:34:12	-- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:23:55.966   06:34:12	-- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:55.966   06:34:12	-- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421
00:23:55.966   06:34:12	-- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88222 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:23:55.966   06:34:12	-- host/multipath.sh@65 -- # dtrace_pid=88675
00:23:55.966   06:34:12	-- host/multipath.sh@66 -- # sleep 6
00:24:02.529    06:34:18	-- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:02.529    06:34:18	-- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:24:02.529   06:34:19	-- host/multipath.sh@67 -- # active_port=4421
00:24:02.529   06:34:19	-- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:02.529  Attaching 4 probes...
00:24:02.529  @path[10.0.0.2, 4421]: 14429
00:24:02.529  @path[10.0.0.2, 4421]: 19799
00:24:02.529  @path[10.0.0.2, 4421]: 19800
00:24:02.529  @path[10.0.0.2, 4421]: 20077
00:24:02.529  @path[10.0.0.2, 4421]: 19867
00:24:02.529    06:34:19	-- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:02.529    06:34:19	-- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:02.529    06:34:19	-- host/multipath.sh@69 -- # sed -n 1p
00:24:02.529   06:34:19	-- host/multipath.sh@69 -- # port=4421
00:24:02.529   06:34:19	-- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:24:02.529   06:34:19	-- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:24:02.529   06:34:19	-- host/multipath.sh@72 -- # kill 88675
00:24:02.529   06:34:19	-- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:02.529   06:34:19	-- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible
00:24:02.529   06:34:19	-- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:24:02.529   06:34:19	-- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:24:02.788   06:34:19	-- host/multipath.sh@94 -- # confirm_io_on_port '' ''
00:24:02.788   06:34:19	-- host/multipath.sh@65 -- # dtrace_pid=88800
00:24:02.788   06:34:19	-- host/multipath.sh@66 -- # sleep 6
00:24:02.788   06:34:19	-- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88222 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:09.352    06:34:25	-- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:09.352    06:34:25	-- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'
00:24:09.352   06:34:25	-- host/multipath.sh@67 -- # active_port=
00:24:09.352   06:34:25	-- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:09.352  Attaching 4 probes...
00:24:09.352  
00:24:09.352  
00:24:09.352  
00:24:09.352  
00:24:09.352  
00:24:09.352    06:34:25	-- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:09.352    06:34:25	-- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:09.352    06:34:25	-- host/multipath.sh@69 -- # sed -n 1p
00:24:09.352   06:34:25	-- host/multipath.sh@69 -- # port=
00:24:09.352   06:34:25	-- host/multipath.sh@70 -- # [[ '' == '' ]]
00:24:09.352   06:34:25	-- host/multipath.sh@71 -- # [[ '' == '' ]]
00:24:09.352   06:34:25	-- host/multipath.sh@72 -- # kill 88800
00:24:09.352   06:34:25	-- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:09.352   06:34:25	-- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized
00:24:09.352   06:34:25	-- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:09.352   06:34:26	-- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:09.352   06:34:26	-- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421
00:24:09.353   06:34:26	-- host/multipath.sh@65 -- # dtrace_pid=88937
00:24:09.353   06:34:26	-- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88222 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:09.353   06:34:26	-- host/multipath.sh@66 -- # sleep 6
00:24:15.918    06:34:32	-- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:15.918    06:34:32	-- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:24:15.918   06:34:32	-- host/multipath.sh@67 -- # active_port=4421
00:24:15.918   06:34:32	-- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:15.918  Attaching 4 probes...
00:24:15.918  @path[10.0.0.2, 4421]: 20390
00:24:15.918  @path[10.0.0.2, 4421]: 20682
00:24:15.918  @path[10.0.0.2, 4421]: 20675
00:24:15.918  @path[10.0.0.2, 4421]: 20537
00:24:15.918  @path[10.0.0.2, 4421]: 20726
00:24:15.918    06:34:32	-- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:15.918    06:34:32	-- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:15.918    06:34:32	-- host/multipath.sh@69 -- # sed -n 1p
00:24:15.918   06:34:32	-- host/multipath.sh@69 -- # port=4421
00:24:15.918   06:34:32	-- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:24:15.918   06:34:32	-- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:24:15.918   06:34:32	-- host/multipath.sh@72 -- # kill 88937
00:24:15.918   06:34:32	-- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:15.918   06:34:32	-- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:15.918  [2024-12-16 06:34:32.828583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21aa800 is same with the state(5) to be set
00:24:15.918  [2024-12-16 06:34:32.828658 .. 06:34:32.829311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21aa800 is same with the state(5) to be set (same message repeated roughly 85 more times while the listener was torn down)
00:24:15.919   06:34:32	-- host/multipath.sh@101 -- # sleep 1
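Removing the 4421 listener while the host still treats it as the active optimized path coincides with the repeated "recv state ... is same with the state(5)" messages above as the target tears down the queue pairs; the multipath bdev is then expected to fail over to the remaining 4420 path, which is what the next confirm_io_on_port verifies. A sketch of the step, with the subsystem and address taken from the logged command:

    # Drop the currently-active path on the target side ...
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421
    sleep 1   # give the host a moment to notice the path loss

    # ... then re-run the bpftrace-based check, this time expecting
    # all I/O to appear on 10.0.0.2:4420 (the non_optimized path).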
00:24:17.297   06:34:33	-- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:24:17.297   06:34:33	-- host/multipath.sh@65 -- # dtrace_pid=89067
00:24:17.297   06:34:33	-- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88222 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:17.297   06:34:33	-- host/multipath.sh@66 -- # sleep 6
00:24:23.863    06:34:39	-- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:23.863    06:34:39	-- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:24:23.863   06:34:40	-- host/multipath.sh@67 -- # active_port=4420
00:24:23.863   06:34:40	-- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:23.863  Attaching 4 probes...
00:24:23.863  @path[10.0.0.2, 4420]: 19981
00:24:23.863  @path[10.0.0.2, 4420]: 20312
00:24:23.863  @path[10.0.0.2, 4420]: 20158
00:24:23.863  @path[10.0.0.2, 4420]: 20376
00:24:23.863  @path[10.0.0.2, 4420]: 20450
00:24:23.863    06:34:40	-- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:23.863    06:34:40	-- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:23.863    06:34:40	-- host/multipath.sh@69 -- # sed -n 1p
00:24:23.863   06:34:40	-- host/multipath.sh@69 -- # port=4420
00:24:23.863   06:34:40	-- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:24:23.863   06:34:40	-- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:24:23.863   06:34:40	-- host/multipath.sh@72 -- # kill 89067
00:24:23.863   06:34:40	-- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:23.863   06:34:40	-- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:23.863  [2024-12-16 06:34:40.350342] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:23.863   06:34:40	-- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:23.863   06:34:40	-- host/multipath.sh@111 -- # sleep 6
00:24:30.427   06:34:46	-- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:24:30.427   06:34:46	-- host/multipath.sh@65 -- # dtrace_pid=89264
00:24:30.427   06:34:46	-- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88222 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:30.427   06:34:46	-- host/multipath.sh@66 -- # sleep 6
00:24:37.003    06:34:52	-- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:37.003    06:34:52	-- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:24:37.003   06:34:52	-- host/multipath.sh@67 -- # active_port=4421
00:24:37.003   06:34:52	-- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:37.003  Attaching 4 probes...
00:24:37.003  @path[10.0.0.2, 4421]: 20222
00:24:37.003  @path[10.0.0.2, 4421]: 20597
00:24:37.003  @path[10.0.0.2, 4421]: 20385
00:24:37.003  @path[10.0.0.2, 4421]: 20466
00:24:37.003  @path[10.0.0.2, 4421]: 20557
00:24:37.003    06:34:52	-- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:37.003    06:34:52	-- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:37.003    06:34:52	-- host/multipath.sh@69 -- # sed -n 1p
00:24:37.003   06:34:52	-- host/multipath.sh@69 -- # port=4421
00:24:37.003   06:34:52	-- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:24:37.003   06:34:52	-- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:24:37.003   06:34:52	-- host/multipath.sh@72 -- # kill 89264
00:24:37.003   06:34:52	-- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:37.003   06:34:52	-- host/multipath.sh@114 -- # killprocess 88320
00:24:37.003   06:34:52	-- common/autotest_common.sh@936 -- # '[' -z 88320 ']'
00:24:37.003   06:34:52	-- common/autotest_common.sh@940 -- # kill -0 88320
00:24:37.003    06:34:52	-- common/autotest_common.sh@941 -- # uname
00:24:37.003   06:34:52	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:37.003    06:34:52	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88320
00:24:37.003  killing process with pid 88320
00:24:37.003   06:34:52	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:24:37.003   06:34:52	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:24:37.003   06:34:52	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 88320'
00:24:37.003   06:34:52	-- common/autotest_common.sh@955 -- # kill 88320
00:24:37.003   06:34:52	-- common/autotest_common.sh@960 -- # wait 88320
00:24:37.003  Connection closed with partial response:
00:24:37.003  
00:24:37.003  
00:24:37.003   06:34:53	-- host/multipath.sh@116 -- # wait 88320
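The shutdown of bdevperf (pid 88320) goes through the killprocess helper in common/autotest_common.sh; the traced line numbers (936-960) map onto roughly the following. This is a sketch inferred from the trace, and the branches not taken in this run (empty pid, already-dead pid, sudo-wrapped process) are guesses:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # line 936: bail out on an empty pid (guessed branch)
        kill -0 "$pid" 2>/dev/null || return 0               # line 940: nothing to do if it is already gone (guessed branch)
        if [ "$(uname)" = Linux ]; then                      # line 941
            process_name=$(ps --no-headers -o comm= "$pid")  # line 942: "reactor_2" here, i.e. bdevperf's reactor thread
        fi
        if [ "$process_name" = sudo ]; then                  # line 946: a sudo wrapper would need its child killed instead
            :                                                #           (not exercised in this run)
        fi
        echo "killing process with pid $pid"                 # line 954
        kill "$pid"                                          # line 955
        wait "$pid"                                          # line 960: reap it and propagate its exit status
    }

The "Connection closed with partial response" message above is bdevperf's TCP layer reporting the teardown; the extra wait at multipath.sh line 116 presumably just makes sure pid 88320 has fully exited before try.txt is dumped.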
00:24:37.003   06:34:53	-- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:37.003  [2024-12-16 06:33:55.942865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:24:37.003  [2024-12-16 06:33:55.942956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88320 ]
00:24:37.003  [2024-12-16 06:33:56.068730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:37.003  [2024-12-16 06:33:56.163490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:37.003  Running I/O for 90 seconds...
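The bulk of try.txt follows: bdevperf's NVMe driver logs each command (nvme_io_qpair_print_command) whose completion came back with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. the path it used had presumably just been moved out of the optimized state. That is the signal this multipath test drives on purpose; the initiator is expected to retry the I/O on whichever listener is currently optimized. Rather than reading the dump line by line, a quick way to gauge how much I/O hit the transition windows is:

    # count the ANA-inaccessible completions recorded during the run
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt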
00:24:37.003  [2024-12-16 06:34:06.054844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.003  [2024-12-16 06:34:06.054988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:37.003  [2024-12-16 06:34:06.055035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.003  [2024-12-16 06:34:06.055054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:37.003  [2024-12-16 06:34:06.055074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.003  [2024-12-16 06:34:06.055087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:37.003  [2024-12-16 06:34:06.055105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.003  [2024-12-16 06:34:06.055118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:37.003  [2024-12-16 06:34:06.055134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.003  [2024-12-16 06:34:06.055146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:37.003  [2024-12-16 06:34:06.055164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.003  [2024-12-16 06:34:06.055177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:37.003  [2024-12-16 06:34:06.055194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.003  [2024-12-16 06:34:06.055206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:37.003  [2024-12-16 06:34:06.055224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.003  [2024-12-16 06:34:06.055236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:37.003  [2024-12-16 06:34:06.055254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.003  [2024-12-16 06:34:06.055266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.055603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.055638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.055695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.055708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.056125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.056161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.056206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.056668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.056705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.056739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.056773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.056806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.056840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.056906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.056954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.056972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.056985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.057062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.057191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.057283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.057312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.057371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.004  [2024-12-16 06:34:06.057399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.004  [2024-12-16 06:34:06.057465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:37.004  [2024-12-16 06:34:06.057483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.057858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.057950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.057967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.057986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.058017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.058047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.058076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.058107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.058139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.058169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.058198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.058227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.058257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.058931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.058977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.058995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.059224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.059290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.005  [2024-12-16 06:34:06.059348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:37.005  [2024-12-16 06:34:06.059525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005  [2024-12-16 06:34:06.059543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.059970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.059983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:06.060013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:06.060051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.060081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:06.060111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.060139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.060168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:06.060197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.060232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.060263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:06.060292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.060326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:06.060356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:06.060374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:06.060386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.587997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.588067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.588229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:12.588267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:12.588300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:12.588332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.588365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.588415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.588451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.588482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.588550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.588585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.588992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.589017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.589042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.006  [2024-12-16 06:34:12.589058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:37.006  [2024-12-16 06:34:12.589077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.006  [2024-12-16 06:34:12.589091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.007  [2024-12-16 06:34:12.589123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.007  [2024-12-16 06:34:12.589298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.007  [2024-12-16 06:34:12.589331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.007  [2024-12-16 06:34:12.589937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.589969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.589987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.590000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.590018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.590031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.590049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.590062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.590080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.590093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.590111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.590129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.590148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.007  [2024-12-16 06:34:12.590161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.590181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.007  [2024-12-16 06:34:12.590194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:37.007  [2024-12-16 06:34:12.590213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.007  [2024-12-16 06:34:12.590225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.008  [2024-12-16 06:34:12.590320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.008  [2024-12-16 06:34:12.590652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.008  [2024-12-16 06:34:12.590750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.008  [2024-12-16 06:34:12.590782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.590963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.590987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.008  [2024-12-16 06:34:12.591033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.008  [2024-12-16 06:34:12.591253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.008  [2024-12-16 06:34:12.591285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.008  [2024-12-16 06:34:12.591317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:37.008  [2024-12-16 06:34:12.591861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.008  [2024-12-16 06:34:12.591890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.591937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.591954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.591980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.591994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.592928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.592966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.592990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.593083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.593160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.593206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.593283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.009  [2024-12-16 06:34:12.593482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.009  [2024-12-16 06:34:12.593538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:37.009  [2024-12-16 06:34:12.593562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:12.593575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:12.593601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:12.593614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.611478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.611612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.611648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.611680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.611712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.611746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.611777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.611810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.611857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.611921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.611961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.611978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.611990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.612908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.010  [2024-12-16 06:34:19.612973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.612991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.613005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.613025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.613038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.613058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.613071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.613091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.613104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.613124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.613138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.613157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.613170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.613190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.010  [2024-12-16 06:34:19.613212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:37.010  [2024-12-16 06:34:19.613233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.613646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.613708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.613746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.613911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.613951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.613973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.613987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.614090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.614195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.614229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.614308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.614448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.614645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.614682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.011  [2024-12-16 06:34:19.614754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.011  [2024-12-16 06:34:19.614937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:37.011  [2024-12-16 06:34:19.614958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.614971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.614993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.615466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.615517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.615565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.615601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.615643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.615789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.615897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.615935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.615972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.615995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.616008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.616045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.616083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.616120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.616157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.616193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.616230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.616268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.012  [2024-12-16 06:34:19.616305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.616356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.012  [2024-12-16 06:34:19.616394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:37.012  [2024-12-16 06:34:19.616419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:19.616433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.013  [2024-12-16 06:34:19.616471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:19.616539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:19.616581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.013  [2024-12-16 06:34:19.616619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.013  [2024-12-16 06:34:19.616657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:19.616695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.013  [2024-12-16 06:34:19.616733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.013  [2024-12-16 06:34:19.616772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:19.616809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:19.616870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.013  [2024-12-16 06:34:19.616909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:19.616946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.616971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:19.616984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:19.617009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.013  [2024-12-16 06:34:19.617029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.829841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.829920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.829948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.829962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.829975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.829987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.013  [2024-12-16 06:34:32.830608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.013  [2024-12-16 06:34:32.830619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.830713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.830737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.830785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.830898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.830926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.830973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.830985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.830996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.014  [2024-12-16 06:34:32.831600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.014  [2024-12-16 06:34:32.831660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.014  [2024-12-16 06:34:32.831672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.831972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.831990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.015  [2024-12-16 06:34:32.832625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.015  [2024-12-16 06:34:32.832670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.015  [2024-12-16 06:34:32.832683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.832694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.832724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.832749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.016  [2024-12-16 06:34:32.832783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.832805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.016  [2024-12-16 06:34:32.832829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.832852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.016  [2024-12-16 06:34:32.832875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.832899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.016  [2024-12-16 06:34:32.832933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.832956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.832980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.832992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.016  [2024-12-16 06:34:32.833003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.833026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.833056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.833086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.833110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.833133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.833156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.016  [2024-12-16 06:34:32.833179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a5b0 is same with the state(5) to be set
00:24:37.016  [2024-12-16 06:34:32.833206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:37.016  [2024-12-16 06:34:32.833215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:37.016  [2024-12-16 06:34:32.833224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47272 len:8 PRP1 0x0 PRP2 0x0
00:24:37.016  [2024-12-16 06:34:32.833235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.016  [2024-12-16 06:34:32.833299] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe7a5b0 was disconnected and freed. reset controller.
00:24:37.016  [2024-12-16 06:34:32.834457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.016  [2024-12-16 06:34:32.834612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101e790 (9): Bad file descriptor
00:24:37.016  [2024-12-16 06:34:32.834754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.016  [2024-12-16 06:34:32.834815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.016  [2024-12-16 06:34:32.834835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101e790 with addr=10.0.0.2, port=4421
00:24:37.016  [2024-12-16 06:34:32.834849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101e790 is same with the state(5) to be set
00:24:37.016  [2024-12-16 06:34:32.834872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101e790 (9): Bad file descriptor
00:24:37.016  [2024-12-16 06:34:32.834892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.016  [2024-12-16 06:34:32.834904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.016  [2024-12-16 06:34:32.834924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.016  [2024-12-16 06:34:32.834987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.016  [2024-12-16 06:34:32.835008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.016  [2024-12-16 06:34:42.884665] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:37.016  Received shutdown signal, test time was about 55.045270 seconds
00:24:37.016  
00:24:37.016                                                                                                  Latency(us)
00:24:37.016  
[2024-12-16T06:34:53.992Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:37.016  
[2024-12-16T06:34:53.992Z]  Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:37.016  	 Verification LBA range: start 0x0 length 0x4000
00:24:37.016  	 Nvme0n1             :      55.04   11626.51      45.42       0.00     0.00   10993.19     428.22 7015926.69
00:24:37.016  
[2024-12-16T06:34:53.992Z]  ===================================================================================================================
00:24:37.016  
[2024-12-16T06:34:53.992Z]  Total                       :              11626.51      45.42       0.00     0.00   10993.19     428.22 7015926.69
00:24:37.016   06:34:53	-- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:37.016   06:34:53	-- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:37.016   06:34:53	-- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:37.016   06:34:53	-- host/multipath.sh@125 -- # nvmftestfini
00:24:37.016   06:34:53	-- nvmf/common.sh@476 -- # nvmfcleanup
00:24:37.016   06:34:53	-- nvmf/common.sh@116 -- # sync
00:24:37.016   06:34:53	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:37.016   06:34:53	-- nvmf/common.sh@119 -- # set +e
00:24:37.016   06:34:53	-- nvmf/common.sh@120 -- # for i in {1..20}
00:24:37.016   06:34:53	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:37.016  rmmod nvme_tcp
00:24:37.016  rmmod nvme_fabrics
00:24:37.016  rmmod nvme_keyring
00:24:37.016   06:34:53	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:37.016   06:34:53	-- nvmf/common.sh@123 -- # set -e
00:24:37.016   06:34:53	-- nvmf/common.sh@124 -- # return 0
00:24:37.016   06:34:53	-- nvmf/common.sh@477 -- # '[' -n 88222 ']'
00:24:37.016   06:34:53	-- nvmf/common.sh@478 -- # killprocess 88222
00:24:37.016   06:34:53	-- common/autotest_common.sh@936 -- # '[' -z 88222 ']'
00:24:37.016   06:34:53	-- common/autotest_common.sh@940 -- # kill -0 88222
00:24:37.016    06:34:53	-- common/autotest_common.sh@941 -- # uname
00:24:37.016   06:34:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:37.016    06:34:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88222
00:24:37.016  killing process with pid 88222
00:24:37.016   06:34:53	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:37.017   06:34:53	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:37.017   06:34:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 88222'
00:24:37.017   06:34:53	-- common/autotest_common.sh@955 -- # kill 88222
00:24:37.017   06:34:53	-- common/autotest_common.sh@960 -- # wait 88222
00:24:37.017   06:34:53	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:37.017   06:34:53	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:37.017   06:34:53	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:37.017   06:34:53	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:37.017   06:34:53	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:37.017   06:34:53	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:37.017   06:34:53	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:37.017    06:34:53	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:37.017   06:34:53	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:37.017  ************************************
00:24:37.017  END TEST nvmf_multipath
00:24:37.017  ************************************
00:24:37.017  
00:24:37.017  real	1m1.103s
00:24:37.017  user	2m51.337s
00:24:37.017  sys	0m14.245s
00:24:37.017   06:34:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:37.017   06:34:53	-- common/autotest_common.sh@10 -- # set +x
00:24:37.017   06:34:53	-- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:24:37.017   06:34:53	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:24:37.017   06:34:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:37.017   06:34:53	-- common/autotest_common.sh@10 -- # set +x
00:24:37.276  ************************************
00:24:37.276  START TEST nvmf_timeout
00:24:37.276  ************************************
00:24:37.276   06:34:53	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:24:37.276  * Looking for test storage...
00:24:37.276  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:24:37.276    06:34:54	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:24:37.276     06:34:54	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:24:37.276     06:34:54	-- common/autotest_common.sh@1690 -- # lcov --version
00:24:37.276    06:34:54	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:24:37.276    06:34:54	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:24:37.276    06:34:54	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:24:37.276    06:34:54	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:24:37.276    06:34:54	-- scripts/common.sh@335 -- # IFS=.-:
00:24:37.276    06:34:54	-- scripts/common.sh@335 -- # read -ra ver1
00:24:37.276    06:34:54	-- scripts/common.sh@336 -- # IFS=.-:
00:24:37.276    06:34:54	-- scripts/common.sh@336 -- # read -ra ver2
00:24:37.276    06:34:54	-- scripts/common.sh@337 -- # local 'op=<'
00:24:37.276    06:34:54	-- scripts/common.sh@339 -- # ver1_l=2
00:24:37.276    06:34:54	-- scripts/common.sh@340 -- # ver2_l=1
00:24:37.276    06:34:54	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:24:37.276    06:34:54	-- scripts/common.sh@343 -- # case "$op" in
00:24:37.276    06:34:54	-- scripts/common.sh@344 -- # : 1
00:24:37.276    06:34:54	-- scripts/common.sh@363 -- # (( v = 0 ))
00:24:37.276    06:34:54	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:37.276     06:34:54	-- scripts/common.sh@364 -- # decimal 1
00:24:37.276     06:34:54	-- scripts/common.sh@352 -- # local d=1
00:24:37.276     06:34:54	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:37.276     06:34:54	-- scripts/common.sh@354 -- # echo 1
00:24:37.276    06:34:54	-- scripts/common.sh@364 -- # ver1[v]=1
00:24:37.276     06:34:54	-- scripts/common.sh@365 -- # decimal 2
00:24:37.276     06:34:54	-- scripts/common.sh@352 -- # local d=2
00:24:37.276     06:34:54	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:37.276     06:34:54	-- scripts/common.sh@354 -- # echo 2
00:24:37.276    06:34:54	-- scripts/common.sh@365 -- # ver2[v]=2
00:24:37.276    06:34:54	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:24:37.276    06:34:54	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:24:37.276    06:34:54	-- scripts/common.sh@367 -- # return 0
00:24:37.276    06:34:54	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:37.276    06:34:54	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:24:37.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:37.276  		--rc genhtml_branch_coverage=1
00:24:37.276  		--rc genhtml_function_coverage=1
00:24:37.276  		--rc genhtml_legend=1
00:24:37.276  		--rc geninfo_all_blocks=1
00:24:37.276  		--rc geninfo_unexecuted_blocks=1
00:24:37.276  		
00:24:37.276  		'
00:24:37.276    06:34:54	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:24:37.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:37.276  		--rc genhtml_branch_coverage=1
00:24:37.276  		--rc genhtml_function_coverage=1
00:24:37.276  		--rc genhtml_legend=1
00:24:37.276  		--rc geninfo_all_blocks=1
00:24:37.276  		--rc geninfo_unexecuted_blocks=1
00:24:37.276  		
00:24:37.276  		'
00:24:37.276    06:34:54	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:24:37.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:37.276  		--rc genhtml_branch_coverage=1
00:24:37.276  		--rc genhtml_function_coverage=1
00:24:37.276  		--rc genhtml_legend=1
00:24:37.276  		--rc geninfo_all_blocks=1
00:24:37.276  		--rc geninfo_unexecuted_blocks=1
00:24:37.276  		
00:24:37.276  		'
00:24:37.276    06:34:54	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:24:37.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:37.276  		--rc genhtml_branch_coverage=1
00:24:37.276  		--rc genhtml_function_coverage=1
00:24:37.276  		--rc genhtml_legend=1
00:24:37.276  		--rc geninfo_all_blocks=1
00:24:37.276  		--rc geninfo_unexecuted_blocks=1
00:24:37.276  		
00:24:37.276  		'
00:24:37.276   06:34:54	-- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:24:37.276     06:34:54	-- nvmf/common.sh@7 -- # uname -s
00:24:37.276    06:34:54	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:37.276    06:34:54	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:37.276    06:34:54	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:37.276    06:34:54	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:37.276    06:34:54	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:37.276    06:34:54	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:37.276    06:34:54	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:37.276    06:34:54	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:37.276    06:34:54	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:37.277     06:34:54	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:37.277    06:34:54	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:24:37.277    06:34:54	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:24:37.277    06:34:54	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:37.277    06:34:54	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:37.277    06:34:54	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:24:37.277    06:34:54	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:37.277     06:34:54	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:37.277     06:34:54	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:37.277     06:34:54	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:37.277      06:34:54	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.277      06:34:54	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.277      06:34:54	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.277      06:34:54	-- paths/export.sh@5 -- # export PATH
00:24:37.277      06:34:54	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.277    06:34:54	-- nvmf/common.sh@46 -- # : 0
00:24:37.277    06:34:54	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:24:37.277    06:34:54	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:24:37.277    06:34:54	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:24:37.277    06:34:54	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:37.277    06:34:54	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:37.277    06:34:54	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:24:37.277    06:34:54	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:24:37.277    06:34:54	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:24:37.277   06:34:54	-- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:37.277   06:34:54	-- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:37.277   06:34:54	-- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:37.277   06:34:54	-- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:24:37.277   06:34:54	-- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:37.277   06:34:54	-- host/timeout.sh@19 -- # nvmftestinit
00:24:37.277   06:34:54	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:24:37.277   06:34:54	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:37.277   06:34:54	-- nvmf/common.sh@436 -- # prepare_net_devs
00:24:37.277   06:34:54	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:24:37.277   06:34:54	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:24:37.277   06:34:54	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:37.277   06:34:54	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:37.277    06:34:54	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:37.277   06:34:54	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:24:37.277   06:34:54	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:24:37.277   06:34:54	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:24:37.277   06:34:54	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:24:37.277   06:34:54	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:24:37.277   06:34:54	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:24:37.277   06:34:54	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:37.277   06:34:54	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:37.277   06:34:54	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:24:37.277   06:34:54	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:24:37.277   06:34:54	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:24:37.277   06:34:54	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:24:37.277   06:34:54	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:24:37.277   06:34:54	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:37.277   06:34:54	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:24:37.277   06:34:54	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:24:37.277   06:34:54	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:24:37.277   06:34:54	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:24:37.277   06:34:54	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:24:37.277   06:34:54	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:24:37.277  Cannot find device "nvmf_tgt_br"
00:24:37.277   06:34:54	-- nvmf/common.sh@154 -- # true
00:24:37.277   06:34:54	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:24:37.277  Cannot find device "nvmf_tgt_br2"
00:24:37.277   06:34:54	-- nvmf/common.sh@155 -- # true
00:24:37.277   06:34:54	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:24:37.277   06:34:54	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:24:37.277  Cannot find device "nvmf_tgt_br"
00:24:37.536   06:34:54	-- nvmf/common.sh@157 -- # true
00:24:37.536   06:34:54	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:24:37.536  Cannot find device "nvmf_tgt_br2"
00:24:37.536   06:34:54	-- nvmf/common.sh@158 -- # true
00:24:37.536   06:34:54	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:24:37.536   06:34:54	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:24:37.536   06:34:54	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:37.536  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:24:37.536   06:34:54	-- nvmf/common.sh@161 -- # true
00:24:37.536   06:34:54	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:37.536  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:24:37.536   06:34:54	-- nvmf/common.sh@162 -- # true
00:24:37.536   06:34:54	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:24:37.536   06:34:54	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:24:37.536   06:34:54	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:24:37.536   06:34:54	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:24:37.536   06:34:54	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:24:37.536   06:34:54	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:24:37.536   06:34:54	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:24:37.536   06:34:54	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:24:37.536   06:34:54	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:24:37.536   06:34:54	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:24:37.536   06:34:54	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:24:37.536   06:34:54	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:24:37.536   06:34:54	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:24:37.536   06:34:54	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:24:37.536   06:34:54	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:24:37.536   06:34:54	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:24:37.536   06:34:54	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:24:37.536   06:34:54	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:24:37.536   06:34:54	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:24:37.536   06:34:54	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:24:37.536   06:34:54	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:24:37.536   06:34:54	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:24:37.536   06:34:54	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:24:37.536   06:34:54	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:24:37.536  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:37.536  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms
00:24:37.536  
00:24:37.536  --- 10.0.0.2 ping statistics ---
00:24:37.536  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.536  rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:24:37.536   06:34:54	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:24:37.536  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:37.536  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms
00:24:37.536  
00:24:37.536  --- 10.0.0.3 ping statistics ---
00:24:37.536  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.536  rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:24:37.536   06:34:54	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:37.536  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:37.536  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:24:37.536  
00:24:37.536  --- 10.0.0.1 ping statistics ---
00:24:37.536  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.536  rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:24:37.536   06:34:54	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:37.536   06:34:54	-- nvmf/common.sh@421 -- # return 0
00:24:37.536   06:34:54	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:24:37.536   06:34:54	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:37.536   06:34:54	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:24:37.536   06:34:54	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:24:37.536   06:34:54	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:37.536   06:34:54	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:24:37.536   06:34:54	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:24:37.536   06:34:54	-- host/timeout.sh@21 -- # nvmfappstart -m 0x3
00:24:37.536   06:34:54	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:24:37.536   06:34:54	-- common/autotest_common.sh@722 -- # xtrace_disable
00:24:37.536   06:34:54	-- common/autotest_common.sh@10 -- # set +x
00:24:37.536   06:34:54	-- nvmf/common.sh@469 -- # nvmfpid=89592
00:24:37.536   06:34:54	-- nvmf/common.sh@470 -- # waitforlisten 89592
00:24:37.536   06:34:54	-- common/autotest_common.sh@829 -- # '[' -z 89592 ']'
00:24:37.536   06:34:54	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:37.536   06:34:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:37.536   06:34:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:24:37.536   06:34:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:37.536  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:37.536   06:34:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:24:37.536   06:34:54	-- common/autotest_common.sh@10 -- # set +x
00:24:37.796  [2024-12-16 06:34:54.569056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:24:37.796  [2024-12-16 06:34:54.569695] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:37.796  [2024-12-16 06:34:54.703898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:38.055  [2024-12-16 06:34:54.791285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:24:38.055  [2024-12-16 06:34:54.791445] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:38.055  [2024-12-16 06:34:54.791460] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:38.055  [2024-12-16 06:34:54.791470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:38.055  [2024-12-16 06:34:54.791603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:38.055  [2024-12-16 06:34:54.791615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:38.624   06:34:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:38.624   06:34:55	-- common/autotest_common.sh@862 -- # return 0
00:24:38.624   06:34:55	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:24:38.624   06:34:55	-- common/autotest_common.sh@728 -- # xtrace_disable
00:24:38.624   06:34:55	-- common/autotest_common.sh@10 -- # set +x
00:24:38.624   06:34:55	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:38.624   06:34:55	-- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:38.624   06:34:55	-- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:38.883  [2024-12-16 06:34:55.778583] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:38.883   06:34:55	-- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:39.141  Malloc0
00:24:39.400   06:34:56	-- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:39.659   06:34:56	-- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:39.659   06:34:56	-- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:39.918  [2024-12-16 06:34:56.777350] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:39.918   06:34:56	-- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:39.918   06:34:56	-- host/timeout.sh@32 -- # bdevperf_pid=89684
00:24:39.918   06:34:56	-- host/timeout.sh@34 -- # waitforlisten 89684 /var/tmp/bdevperf.sock
00:24:39.918   06:34:56	-- common/autotest_common.sh@829 -- # '[' -z 89684 ']'
00:24:39.918   06:34:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:39.918   06:34:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:24:39.918  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:39.918   06:34:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:39.918   06:34:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:24:39.918   06:34:56	-- common/autotest_common.sh@10 -- # set +x
00:24:39.918  [2024-12-16 06:34:56.837067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:24:39.918  [2024-12-16 06:34:56.837154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89684 ]
00:24:40.177  [2024-12-16 06:34:56.971441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:40.177  [2024-12-16 06:34:57.077988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:41.114   06:34:57	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:41.114   06:34:57	-- common/autotest_common.sh@862 -- # return 0
00:24:41.114   06:34:57	-- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:41.114   06:34:58	-- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:24:41.681  NVMe0n1
00:24:41.681   06:34:58	-- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:41.681   06:34:58	-- host/timeout.sh@51 -- # rpc_pid=89726
00:24:41.681   06:34:58	-- host/timeout.sh@53 -- # sleep 1
00:24:41.681  Running I/O for 10 seconds...
00:24:42.617   06:34:59	-- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:42.879  [2024-12-16 06:34:59.620365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620420] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620443] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620580] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620588] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620596] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620620] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.879  [2024-12-16 06:34:59.620675] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620716] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.620822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde4a40 is same with the state(5) to be set
00:24:42.880  [2024-12-16 06:34:59.621394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.880  [2024-12-16 06:34:59.621977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.621987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.621995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.622005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.622013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.622023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.622031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.622041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.622049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.622058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.622067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.880  [2024-12-16 06:34:59.622076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.880  [2024-12-16 06:34:59.622084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.622294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.622311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.622329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.622371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.622442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.622888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.622958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.623006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.623052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.623314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.623397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.623440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.623481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.623634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.623817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.623888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.624009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.624029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.624048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.624075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.624100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.624134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.624155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.624172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.624188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.624205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.624222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.881  [2024-12-16 06:34:59.624238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.881  [2024-12-16 06:34:59.624246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.881  [2024-12-16 06:34:59.624254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.624957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.624984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.882  [2024-12-16 06:34:59.624992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.625001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.625015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.882  [2024-12-16 06:34:59.625024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.882  [2024-12-16 06:34:59.625032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.883  [2024-12-16 06:34:59.625055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.883  [2024-12-16 06:34:59.625072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.883  [2024-12-16 06:34:59.625089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.883  [2024-12-16 06:34:59.625106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.883  [2024-12-16 06:34:59.625123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.883  [2024-12-16 06:34:59.625140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.883  [2024-12-16 06:34:59.625158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.883  [2024-12-16 06:34:59.625174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.883  [2024-12-16 06:34:59.625191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.883  [2024-12-16 06:34:59.625209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.883  [2024-12-16 06:34:59.625226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.883  [2024-12-16 06:34:59.625243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.883  [2024-12-16 06:34:59.625260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.883  [2024-12-16 06:34:59.625277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:42.883  [2024-12-16 06:34:59.625320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:42.883  [2024-12-16 06:34:59.625328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130912 len:8 PRP1 0x0 PRP2 0x0
00:24:42.883  [2024-12-16 06:34:59.625335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625386] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22f0050 was disconnected and freed. reset controller.
00:24:42.883  [2024-12-16 06:34:59.625478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:42.883  [2024-12-16 06:34:59.625518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:42.883  [2024-12-16 06:34:59.625536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:42.883  [2024-12-16 06:34:59.625553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:42.883  [2024-12-16 06:34:59.625569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.883  [2024-12-16 06:34:59.625577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227adc0 is same with the state(5) to be set
00:24:42.883  [2024-12-16 06:34:59.625755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.883  [2024-12-16 06:34:59.625776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227adc0 (9): Bad file descriptor
00:24:42.883  [2024-12-16 06:34:59.625860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.883  [2024-12-16 06:34:59.625913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.883  [2024-12-16 06:34:59.625928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227adc0 with addr=10.0.0.2, port=4420
00:24:42.883  [2024-12-16 06:34:59.625937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227adc0 is same with the state(5) to be set
00:24:42.883  [2024-12-16 06:34:59.625953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227adc0 (9): Bad file descriptor
00:24:42.883  [2024-12-16 06:34:59.625966] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.883  [2024-12-16 06:34:59.625975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.883  [2024-12-16 06:34:59.625983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.883  [2024-12-16 06:34:59.626000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.883  [2024-12-16 06:34:59.626009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.883   06:34:59	-- host/timeout.sh@56 -- # sleep 2
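The connect() failures above report errno = 111, which on Linux is ECONNREFUSED — consistent with the target listener having been taken down earlier in this timeout test, so each reconnect attempt from bdev_nvme is actively refused. A quick way to confirm the errno mapping on the test host (a hedged sketch, not part of host/timeout.sh):

```bash
# Look up errno 111 on this system; on Linux it prints ECONNREFUSED,
# i.e. the TCP connect() to 10.0.0.2:4420 was refused because nothing is listening.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
```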
00:24:44.788  [2024-12-16 06:35:01.639454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.788  [2024-12-16 06:35:01.639531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.788  [2024-12-16 06:35:01.639548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227adc0 with addr=10.0.0.2, port=4420
00:24:44.788  [2024-12-16 06:35:01.639558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227adc0 is same with the state(5) to be set
00:24:44.788  [2024-12-16 06:35:01.639575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227adc0 (9): Bad file descriptor
00:24:44.788  [2024-12-16 06:35:01.639589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:44.788  [2024-12-16 06:35:01.639597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:44.788  [2024-12-16 06:35:01.639605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:44.788  [2024-12-16 06:35:01.639625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:44.788  [2024-12-16 06:35:01.639634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:44.788    06:35:01	-- host/timeout.sh@57 -- # get_controller
00:24:44.788    06:35:01	-- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:44.788    06:35:01	-- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:45.047   06:35:01	-- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:45.047    06:35:01	-- host/timeout.sh@58 -- # get_bdev
00:24:45.047    06:35:01	-- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:45.047    06:35:01	-- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:45.305   06:35:02	-- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:45.305   06:35:02	-- host/timeout.sh@61 -- # sleep 5
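The get_controller / get_bdev checks above are thin wrappers around the bdevperf RPC socket: they list the attached NVMe controllers and registered bdevs and extract the names with jq, so the test can assert that NVMe0 and NVMe0n1 are still present while I/O is timing out. A minimal sketch of that pattern (the function names here are illustrative, not the ones defined in host/timeout.sh):

```bash
#!/usr/bin/env bash
# Query the bdevperf instance over its RPC socket and pull out object names.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

list_controllers() {
    # bdev_nvme_get_controllers returns a JSON array of attached controllers.
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
}

list_bdevs() {
    # bdev_get_bdevs returns a JSON array of all registered bdevs.
    "$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name'
}

# Assert the controller and its namespace bdev are still registered.
[[ "$(list_controllers)" == "NVMe0" ]]
[[ "$(list_bdevs)" == "NVMe0n1" ]]
```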
00:24:46.683  [2024-12-16 06:35:03.639693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.683  [2024-12-16 06:35:03.639767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.683  [2024-12-16 06:35:03.639785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227adc0 with addr=10.0.0.2, port=4420
00:24:46.683  [2024-12-16 06:35:03.639795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227adc0 is same with the state(5) to be set
00:24:46.683  [2024-12-16 06:35:03.639812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227adc0 (9): Bad file descriptor
00:24:46.683  [2024-12-16 06:35:03.639826] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:46.683  [2024-12-16 06:35:03.639834] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:46.683  [2024-12-16 06:35:03.639842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:46.683  [2024-12-16 06:35:03.639859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:46.683  [2024-12-16 06:35:03.639869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:49.216  [2024-12-16 06:35:05.639885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:49.216  [2024-12-16 06:35:05.639917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:49.216  [2024-12-16 06:35:05.639927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:49.216  [2024-12-16 06:35:05.639944] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:49.216  [2024-12-16 06:35:05.639961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.783  
00:24:49.783                                                                                                  Latency(us)
00:24:49.783  
[2024-12-16T06:35:06.759Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:49.783  
[2024-12-16T06:35:06.759Z]  Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:49.783  	 Verification LBA range: start 0x0 length 0x4000
00:24:49.783  	 NVMe0n1             :       8.17    1998.11       7.81      15.67     0.00   63476.37    2398.02 7015926.69
00:24:49.783  
[2024-12-16T06:35:06.759Z]  ===================================================================================================================
00:24:49.783  
[2024-12-16T06:35:06.759Z]  Total                       :               1998.11       7.81      15.67     0.00   63476.37    2398.02 7015926.69
00:24:49.783  0
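In the summary above, the MiB/s column follows directly from the IOPS column and the 4096-byte I/O size shown in the job header: 1998.11 IOPS x 4096 B ≈ 7.81 MiB/s. A one-line check of that arithmetic (a sketch, not part of the test):

```bash
# 1998.11 IOPS * 4096 bytes per I/O, converted to MiB/s -> prints 7.81
awk 'BEGIN { printf "%.2f\n", 1998.11 * 4096 / (1024 * 1024) }'
```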
00:24:50.351    06:35:07	-- host/timeout.sh@62 -- # get_controller
00:24:50.351    06:35:07	-- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:50.351    06:35:07	-- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:50.609   06:35:07	-- host/timeout.sh@62 -- # [[ '' == '' ]]
00:24:50.609    06:35:07	-- host/timeout.sh@63 -- # get_bdev
00:24:50.609    06:35:07	-- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:50.609    06:35:07	-- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:50.873   06:35:07	-- host/timeout.sh@63 -- # [[ '' == '' ]]
00:24:50.873   06:35:07	-- host/timeout.sh@65 -- # wait 89726
00:24:50.873   06:35:07	-- host/timeout.sh@67 -- # killprocess 89684
00:24:50.873   06:35:07	-- common/autotest_common.sh@936 -- # '[' -z 89684 ']'
00:24:50.873   06:35:07	-- common/autotest_common.sh@940 -- # kill -0 89684
00:24:50.873    06:35:07	-- common/autotest_common.sh@941 -- # uname
00:24:50.873   06:35:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:50.873    06:35:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89684
00:24:50.873  killing process with pid 89684
00:24:50.873  Received shutdown signal, test time was about 9.200684 seconds
00:24:50.873  
00:24:50.873                                                                                                  Latency(us)
00:24:50.873  
[2024-12-16T06:35:07.849Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:50.873  
[2024-12-16T06:35:07.849Z]  ===================================================================================================================
00:24:50.873  
[2024-12-16T06:35:07.849Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:24:50.873   06:35:07	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:24:50.873   06:35:07	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:24:50.873   06:35:07	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 89684'
00:24:50.873   06:35:07	-- common/autotest_common.sh@955 -- # kill 89684
00:24:50.873   06:35:07	-- common/autotest_common.sh@960 -- # wait 89684
00:24:51.159   06:35:07	-- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:51.450  [2024-12-16 06:35:08.146259] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
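The nvmf_subsystem_add_listener call above restores the TCP listener on 10.0.0.2:4420, which the earlier part of the test had evidently taken down (hence the repeated ECONNREFUSED reconnect failures). Adding and removing the subsystem listener is how this timeout test makes the target reachable or unreachable without touching the initiator; a hedged sketch of the pair of RPCs involved, issued against the target's default RPC socket:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Take the path down: connects from the host start failing with ECONNREFUSED.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Bring the path back: the target logs "NVMe/TCP Target Listening" again.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```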
00:24:51.451  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:51.451   06:35:08	-- host/timeout.sh@74 -- # bdevperf_pid=89885
00:24:51.451   06:35:08	-- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:51.451   06:35:08	-- host/timeout.sh@76 -- # waitforlisten 89885 /var/tmp/bdevperf.sock
00:24:51.451   06:35:08	-- common/autotest_common.sh@829 -- # '[' -z 89885 ']'
00:24:51.451   06:35:08	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:51.451   06:35:08	-- common/autotest_common.sh@834 -- # local max_retries=100
00:24:51.451   06:35:08	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:51.451   06:35:08	-- common/autotest_common.sh@838 -- # xtrace_disable
00:24:51.451   06:35:08	-- common/autotest_common.sh@10 -- # set +x
00:24:51.451  [2024-12-16 06:35:08.203733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:24:51.451  [2024-12-16 06:35:08.203817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89885 ]
00:24:51.451  [2024-12-16 06:35:08.327367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:51.722  [2024-12-16 06:35:08.413250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:52.289   06:35:09	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:52.290   06:35:09	-- common/autotest_common.sh@862 -- # return 0
00:24:52.290   06:35:09	-- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:52.548   06:35:09	-- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:24:52.807  NVMe0n1
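The bdev_nvme_attach_controller call above is where this test's timeout behaviour is configured: roughly, --reconnect-delay-sec spaces the reconnect attempts, --fast-io-fail-timeout-sec is how long queued I/O waits before it starts failing fast, and --ctrlr-loss-timeout-sec is how long the bdev layer keeps trying before giving up on the controller entirely. A sketch of the same attach with the options called out (paths, names, and values as used in this run):

```bash
# Timeouts used by this run:
#   --reconnect-delay-sec 1       wait 1 s between reconnect attempts
#   --fast-io-fail-timeout-sec 2  start failing pending I/O after ~2 s of disconnection
#   --ctrlr-loss-timeout-sec 5    give up on the controller after ~5 s of disconnection
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
```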
00:24:52.807   06:35:09	-- host/timeout.sh@84 -- # rpc_pid=89927
00:24:52.807   06:35:09	-- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:52.807   06:35:09	-- host/timeout.sh@86 -- # sleep 1
00:24:52.807  Running I/O for 10 seconds...
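bdevperf was started above with -z, so it initializes, opens the RPC socket, and then idles until it is told to run; the bdevperf.py perform_tests call is what actually kicks off the 10-second verify workload (queue depth 128 and 4 KiB I/O come from the -q/-o/-w/-t flags on the command line). The launch-then-trigger pattern, sketched from the commands in this log:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf on core 2 (-m 0x4) in wait-for-RPC mode (-z); it will not run I/O yet.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &

# ... attach the NVMe-oF controller over the RPC socket (as above), then start the run:
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```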
00:24:53.743   06:35:10	-- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:54.004  [2024-12-16 06:35:10.869608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.869937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:24:54.004  [2024-12-16 06:35:10.870194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.004  [2024-12-16 06:35:10.870246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.004  [2024-12-16 06:35:10.870264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.004  [2024-12-16 06:35:10.870273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.004  [2024-12-16 06:35:10.870283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.004  [2024-12-16 06:35:10.870291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.005  [2024-12-16 06:35:10.870842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.005  [2024-12-16 06:35:10.870859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.005  [2024-12-16 06:35:10.870875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.005  [2024-12-16 06:35:10.870898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.005  [2024-12-16 06:35:10.870914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.005  [2024-12-16 06:35:10.870975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.870984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.870991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.005  [2024-12-16 06:35:10.871000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.005  [2024-12-16 06:35:10.871007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.006  [2024-12-16 06:35:10.871682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.006  [2024-12-16 06:35:10.871707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.006  [2024-12-16 06:35:10.871715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.871732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.871750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.871766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.871783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.871799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.871815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.871832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.871849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.871865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.871890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.871907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.871924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.871940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.871962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.871978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.871987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.871995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.872057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.872152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.007  [2024-12-16 06:35:10.872199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.007  [2024-12-16 06:35:10.872405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.007  [2024-12-16 06:35:10.872414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.008  [2024-12-16 06:35:10.872422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.008  [2024-12-16 06:35:10.872430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.008  [2024-12-16 06:35:10.872437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.008  [2024-12-16 06:35:10.872446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.008  [2024-12-16 06:35:10.872454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.008  [2024-12-16 06:35:10.872462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615050 is same with the state(5) to be set
00:24:54.008  [2024-12-16 06:35:10.872472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:54.008  [2024-12-16 06:35:10.872479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:54.008  [2024-12-16 06:35:10.872494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122744 len:8 PRP1 0x0 PRP2 0x0
00:24:54.008  [2024-12-16 06:35:10.872507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.008  [2024-12-16 06:35:10.872574] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615050 was disconnected and freed. reset controller.
00:24:54.008  [2024-12-16 06:35:10.872642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:54.008  [2024-12-16 06:35:10.872664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.008  [2024-12-16 06:35:10.872674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:54.008  [2024-12-16 06:35:10.872681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.008  [2024-12-16 06:35:10.872689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:54.008  [2024-12-16 06:35:10.872697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.008  [2024-12-16 06:35:10.872705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:54.008  [2024-12-16 06:35:10.872719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.008  [2024-12-16 06:35:10.872727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59fdc0 is same with the state(5) to be set
00:24:54.008  [2024-12-16 06:35:10.872907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.008  [2024-12-16 06:35:10.872929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fdc0 (9): Bad file descriptor
00:24:54.008  [2024-12-16 06:35:10.873003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.008  [2024-12-16 06:35:10.873045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.008  [2024-12-16 06:35:10.873059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59fdc0 with addr=10.0.0.2, port=4420
00:24:54.008  [2024-12-16 06:35:10.873068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59fdc0 is same with the state(5) to be set
00:24:54.008  [2024-12-16 06:35:10.873083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fdc0 (9): Bad file descriptor
00:24:54.008  [2024-12-16 06:35:10.873096] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:54.008  [2024-12-16 06:35:10.873104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:54.008  [2024-12-16 06:35:10.873114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:54.008  [2024-12-16 06:35:10.873130] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:54.008  [2024-12-16 06:35:10.883344] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.008   06:35:10	-- host/timeout.sh@90 -- # sleep 1
00:24:54.944  [2024-12-16 06:35:11.883450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.944  [2024-12-16 06:35:11.883526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.944  [2024-12-16 06:35:11.883543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59fdc0 with addr=10.0.0.2, port=4420
00:24:54.944  [2024-12-16 06:35:11.883553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59fdc0 is same with the state(5) to be set
00:24:54.944  [2024-12-16 06:35:11.883571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fdc0 (9): Bad file descriptor
00:24:54.944  [2024-12-16 06:35:11.883585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:54.944  [2024-12-16 06:35:11.883593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:54.944  [2024-12-16 06:35:11.883601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:54.944  [2024-12-16 06:35:11.883618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:54.944  [2024-12-16 06:35:11.883628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.944   06:35:11	-- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:55.203  [2024-12-16 06:35:12.144805] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:55.203   06:35:12	-- host/timeout.sh@92 -- # wait 89927
00:24:56.138  [2024-12-16 06:35:12.901120] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:04.257  
00:25:04.257                                                                                                  Latency(us)
00:25:04.257   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:04.257   Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:04.257  	 Verification LBA range: start 0x0 length 0x4000
00:25:04.257  	 NVMe0n1             :      10.00   10411.28      40.67       0.00     0.00   12275.69     923.46 3019898.88
00:25:04.257   ===================================================================================================================
00:25:04.257   Total                       :              10411.28      40.67       0.00     0.00   12275.69     923.46 3019898.88
00:25:04.257  0
00:25:04.257   06:35:19	-- host/timeout.sh@97 -- # rpc_pid=90048
00:25:04.257   06:35:19	-- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:04.257   06:35:19	-- host/timeout.sh@98 -- # sleep 1
00:25:04.257  Running I/O for 10 seconds...
00:25:04.257   06:35:20	-- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:04.257  [2024-12-16 06:35:21.069562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ec70 is same with the state(5) to be set
00:25:04.257  [2024-12-16 06:35:21.069642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ec70 is same with the state(5) to be set
00:25:04.257  [2024-12-16 06:35:21.069653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ec70 is same with the state(5) to be set
00:25:04.258  [2024-12-16 06:35:21.069662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ec70 is same with the state(5) to be set
00:25:04.258  [2024-12-16 06:35:21.069670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ec70 is same with the state(5) to be set
00:25:04.258  [2024-12-16 06:35:21.070365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.258  [2024-12-16 06:35:21.070724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.258  [2024-12-16 06:35:21.070732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.070964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.259  [2024-12-16 06:35:21.070981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.070990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.259  [2024-12-16 06:35:21.070999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.259  [2024-12-16 06:35:21.071016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.259  [2024-12-16 06:35:21.071051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.259  [2024-12-16 06:35:21.071118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.259  [2024-12-16 06:35:21.071229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.259  [2024-12-16 06:35:21.071246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.259  [2024-12-16 06:35:21.071397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.259  [2024-12-16 06:35:21.071405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.071920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.071983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.071991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.072000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.072009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.072016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.072025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.072033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.072042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.072049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.072058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.072066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.072076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.260  [2024-12-16 06:35:21.072084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.072094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.260  [2024-12-16 06:35:21.072103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.260  [2024-12-16 06:35:21.072111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.261  [2024-12-16 06:35:21.072119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.261  [2024-12-16 06:35:21.072135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.261  [2024-12-16 06:35:21.072333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.261  [2024-12-16 06:35:21.072349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.261  [2024-12-16 06:35:21.072367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.261  [2024-12-16 06:35:21.072385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.261  [2024-12-16 06:35:21.072418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:04.261  [2024-12-16 06:35:21.072434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.261  [2024-12-16 06:35:21.072616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x610f90 is same with the state(5) to be set
00:25:04.261  [2024-12-16 06:35:21.072636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:04.261  [2024-12-16 06:35:21.072642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:04.261  [2024-12-16 06:35:21.072660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:8 PRP1 0x0 PRP2 0x0
00:25:04.261  [2024-12-16 06:35:21.072668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.261  [2024-12-16 06:35:21.072729] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x610f90 was disconnected and freed. reset controller.
00:25:04.261  [2024-12-16 06:35:21.072926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:04.261  [2024-12-16 06:35:21.073004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fdc0 (9): Bad file descriptor
00:25:04.261  [2024-12-16 06:35:21.073119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.261  [2024-12-16 06:35:21.073162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.261  [2024-12-16 06:35:21.073177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59fdc0 with addr=10.0.0.2, port=4420
00:25:04.261  [2024-12-16 06:35:21.073186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59fdc0 is same with the state(5) to be set
00:25:04.261  [2024-12-16 06:35:21.073202] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fdc0 (9): Bad file descriptor
00:25:04.261  [2024-12-16 06:35:21.073216] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:04.261  [2024-12-16 06:35:21.073226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:04.261  [2024-12-16 06:35:21.073235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:04.261  [2024-12-16 06:35:21.073253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:04.261  [2024-12-16 06:35:21.073263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:04.261   06:35:21	-- host/timeout.sh@101 -- # sleep 3
00:25:05.198  [2024-12-16 06:35:22.073325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.198  [2024-12-16 06:35:22.073399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.198  [2024-12-16 06:35:22.073415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59fdc0 with addr=10.0.0.2, port=4420
00:25:05.198  [2024-12-16 06:35:22.073425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59fdc0 is same with the state(5) to be set
00:25:05.198  [2024-12-16 06:35:22.073441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fdc0 (9): Bad file descriptor
00:25:05.198  [2024-12-16 06:35:22.073455] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:05.198  [2024-12-16 06:35:22.073463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:05.198  [2024-12-16 06:35:22.073471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:05.198  [2024-12-16 06:35:22.073500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:05.198  [2024-12-16 06:35:22.073512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:06.134  [2024-12-16 06:35:23.073587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.134  [2024-12-16 06:35:23.073660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.134  [2024-12-16 06:35:23.073676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59fdc0 with addr=10.0.0.2, port=4420
00:25:06.134  [2024-12-16 06:35:23.073686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59fdc0 is same with the state(5) to be set
00:25:06.134  [2024-12-16 06:35:23.073701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fdc0 (9): Bad file descriptor
00:25:06.134  [2024-12-16 06:35:23.073715] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:06.134  [2024-12-16 06:35:23.073724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:06.134  [2024-12-16 06:35:23.073731] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:06.134  [2024-12-16 06:35:23.073747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:06.134  [2024-12-16 06:35:23.073757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:07.510  [2024-12-16 06:35:24.075377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.510  [2024-12-16 06:35:24.075455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.510  [2024-12-16 06:35:24.075470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59fdc0 with addr=10.0.0.2, port=4420
00:25:07.510  [2024-12-16 06:35:24.075480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59fdc0 is same with the state(5) to be set
00:25:07.510  [2024-12-16 06:35:24.075621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fdc0 (9): Bad file descriptor
00:25:07.510  [2024-12-16 06:35:24.075765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:07.510  [2024-12-16 06:35:24.075776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:07.510  [2024-12-16 06:35:24.075784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:07.510  [2024-12-16 06:35:24.077702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:07.511  [2024-12-16 06:35:24.077726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:07.511   06:35:24	-- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:07.511  [2024-12-16 06:35:24.336632] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:07.511   06:35:24	-- host/timeout.sh@103 -- # wait 90048
00:25:08.446  [2024-12-16 06:35:25.102470] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:13.796  
00:25:13.796                                                                                                  Latency(us)
00:25:13.796  
[2024-12-16T06:35:30.772Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:13.796  
[2024-12-16T06:35:30.772Z]  Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:13.796  	 Verification LBA range: start 0x0 length 0x4000
00:25:13.796  	 NVMe0n1             :      10.01    8871.83      34.66    7469.39     0.00    7821.83     584.61 3019898.88
00:25:13.796  
[2024-12-16T06:35:30.772Z]  ===================================================================================================================
00:25:13.796  
[2024-12-16T06:35:30.772Z]  Total                       :               8871.83      34.66    7469.39     0.00    7821.83       0.00 3019898.88
00:25:13.796  0
00:25:13.796   06:35:29	-- host/timeout.sh@105 -- # killprocess 89885
00:25:13.796   06:35:29	-- common/autotest_common.sh@936 -- # '[' -z 89885 ']'
00:25:13.796   06:35:29	-- common/autotest_common.sh@940 -- # kill -0 89885
00:25:13.796    06:35:29	-- common/autotest_common.sh@941 -- # uname
00:25:13.796   06:35:29	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:13.796    06:35:29	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89885
00:25:13.796  killing process with pid 89885
00:25:13.796  Received shutdown signal, test time was about 10.000000 seconds
00:25:13.796  
00:25:13.796                                                                                                  Latency(us)
00:25:13.796  
[2024-12-16T06:35:30.772Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:13.796  
[2024-12-16T06:35:30.772Z]  ===================================================================================================================
00:25:13.796  
[2024-12-16T06:35:30.772Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:13.796   06:35:29	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:13.796   06:35:29	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:13.796   06:35:29	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 89885'
00:25:13.796   06:35:29	-- common/autotest_common.sh@955 -- # kill 89885
00:25:13.796   06:35:29	-- common/autotest_common.sh@960 -- # wait 89885
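killprocess above tears down the previous bdevperf instance (pid 89885) before a new one is started against the same RPC socket. A reduced sketch of the kill-then-reap pattern visible in the trace, assuming the pid is a child of the invoking shell as it is in this harness:

    # stop the old bdevperf and wait for it to exit before reusing /var/tmp/bdevperf.sock
    kill 89885
    wait 89885 2>/dev/null || true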
00:25:13.796  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:13.796   06:35:30	-- host/timeout.sh@110 -- # bdevperf_pid=90176
00:25:13.796   06:35:30	-- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:25:13.796   06:35:30	-- host/timeout.sh@112 -- # waitforlisten 90176 /var/tmp/bdevperf.sock
00:25:13.796   06:35:30	-- common/autotest_common.sh@829 -- # '[' -z 90176 ']'
00:25:13.796   06:35:30	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:13.796   06:35:30	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:13.796   06:35:30	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:13.796   06:35:30	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:13.796   06:35:30	-- common/autotest_common.sh@10 -- # set +x
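Here a fresh bdevperf is launched with -z and a private RPC socket (-r /var/tmp/bdevperf.sock), so that it is driven entirely over RPC, and waitforlisten blocks until that socket answers before the test proceeds. A minimal bash sketch of such a wait loop, assuming the same socket path; this loop is illustrative and not the harness's actual waitforlisten implementation:

    sock=/var/tmp/bdevperf.sock
    # poll until the application responds to a trivial RPC on its private socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done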
00:25:13.796  [2024-12-16 06:35:30.361733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:13.796  [2024-12-16 06:35:30.361837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90176 ]
00:25:13.796  [2024-12-16 06:35:30.494712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:13.796  [2024-12-16 06:35:30.582083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:14.361   06:35:31	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:14.361   06:35:31	-- common/autotest_common.sh@862 -- # return 0
00:25:14.361   06:35:31	-- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90176 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:25:14.361   06:35:31	-- host/timeout.sh@116 -- # dtrace_pid=90204
00:25:14.361   06:35:31	-- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:25:14.620   06:35:31	-- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:25:15.186  NVMe0n1
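The attach above creates bdev NVMe0n1 over NVMe/TCP with the host-side recovery knobs this timeout test relies on. A commented restatement of the same rpc.py call (the option gloss follows SPDK's bdev_nvme documentation and should be read as approximate):

    # --ctrlr-loss-timeout-sec 5 : keep trying to reconnect for about 5 s before the controller is failed
    # --reconnect-delay-sec 2    : wait 2 s between reconnect attempts
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2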
00:25:15.186   06:35:31	-- host/timeout.sh@124 -- # rpc_pid=90256
00:25:15.186   06:35:31	-- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:15.187   06:35:31	-- host/timeout.sh@125 -- # sleep 1
00:25:15.187  Running I/O for 10 seconds...
00:25:16.123   06:35:32	-- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:16.384  [2024-12-16 06:35:33.132860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.132939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.132950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.132959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.132966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.132975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.132982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.132991] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.132998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133020] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133035] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133049] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133064] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133104] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.384  [2024-12-16 06:35:33.133119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133148] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133169] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133220] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133249] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133256] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133318] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133339] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133353] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133437] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133590] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133598] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133613] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133645] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.385  [2024-12-16 06:35:33.133845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133958] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.133986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32400 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.134212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:16.386  [2024-12-16 06:35:33.134251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:16.386  [2024-12-16 06:35:33.134270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:16.386  [2024-12-16 06:35:33.134285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:16.386  [2024-12-16 06:35:33.134303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6acdc0 is same with the state(5) to be set
00:25:16.386  [2024-12-16 06:35:33.134353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.386  [2024-12-16 06:35:33.134912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.386  [2024-12-16 06:35:33.134921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.134929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.134938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.134946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.134956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.134963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.134973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.134981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.134991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.134998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.387  [2024-12-16 06:35:33.135618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.387  [2024-12-16 06:35:33.135628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.135989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.135998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.388  [2024-12-16 06:35:33.136327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.388  [2024-12-16 06:35:33.136334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.389  [2024-12-16 06:35:33.136666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x722050 is same with the state(5) to be set
00:25:16.389  [2024-12-16 06:35:33.136683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:16.389  [2024-12-16 06:35:33.136690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:16.389  [2024-12-16 06:35:33.136696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69648 len:8 PRP1 0x0 PRP2 0x0
00:25:16.389  [2024-12-16 06:35:33.136704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.389  [2024-12-16 06:35:33.136751] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x722050 was disconnected and freed. reset controller.
00:25:16.389  [2024-12-16 06:35:33.136961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.389  [2024-12-16 06:35:33.136984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6acdc0 (9): Bad file descriptor
00:25:16.389  [2024-12-16 06:35:33.137062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.389  [2024-12-16 06:35:33.137105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.389  [2024-12-16 06:35:33.137121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6acdc0 with addr=10.0.0.2, port=4420
00:25:16.389  [2024-12-16 06:35:33.137129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6acdc0 is same with the state(5) to be set
00:25:16.389  [2024-12-16 06:35:33.137145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6acdc0 (9): Bad file descriptor
00:25:16.389  [2024-12-16 06:35:33.137158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.389  [2024-12-16 06:35:33.137165] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.389  [2024-12-16 06:35:33.137173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.389  [2024-12-16 06:35:33.152207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.389  [2024-12-16 06:35:33.152249] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.389   06:35:33	-- host/timeout.sh@128 -- # wait 90256
00:25:18.291  [2024-12-16 06:35:35.152336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.291  [2024-12-16 06:35:35.152413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.291  [2024-12-16 06:35:35.152430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6acdc0 with addr=10.0.0.2, port=4420
00:25:18.291  [2024-12-16 06:35:35.152440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6acdc0 is same with the state(5) to be set
00:25:18.291  [2024-12-16 06:35:35.152456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6acdc0 (9): Bad file descriptor
00:25:18.291  [2024-12-16 06:35:35.152471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.291  [2024-12-16 06:35:35.152479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.291  [2024-12-16 06:35:35.152499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.291  [2024-12-16 06:35:35.152517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.291  [2024-12-16 06:35:35.152527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:20.221  [2024-12-16 06:35:37.152604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.221  [2024-12-16 06:35:37.152680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.221  [2024-12-16 06:35:37.152706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6acdc0 with addr=10.0.0.2, port=4420
00:25:20.221  [2024-12-16 06:35:37.152717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6acdc0 is same with the state(5) to be set
00:25:20.221  [2024-12-16 06:35:37.152733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6acdc0 (9): Bad file descriptor
00:25:20.221  [2024-12-16 06:35:37.152748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:20.221  [2024-12-16 06:35:37.152756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:20.221  [2024-12-16 06:35:37.152764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:20.221  [2024-12-16 06:35:37.152782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:20.221  [2024-12-16 06:35:37.152791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:22.753  [2024-12-16 06:35:39.152832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:22.753  [2024-12-16 06:35:39.152874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:22.753  [2024-12-16 06:35:39.152883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:22.753  [2024-12-16 06:35:39.152891] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:25:22.753  [2024-12-16 06:35:39.152906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:23.320  
00:25:23.320                                                                                                  Latency(us)
00:25:23.320  
[2024-12-16T06:35:40.296Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:23.320  
[2024-12-16T06:35:40.296Z]  Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:25:23.320  	 NVMe0n1             :       8.13    2967.36      11.59      15.75     0.00   42879.38    1861.82 7015926.69
00:25:23.320  
[2024-12-16T06:35:40.297Z]  ===================================================================================================================
00:25:23.321  
[2024-12-16T06:35:40.297Z]  Total                       :               2967.36      11.59      15.75     0.00   42879.38    1861.82 7015926.69
00:25:23.321  0
00:25:23.321   06:35:40	-- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:23.321  Attaching 5 probes...
00:25:23.321  1376.625468: reset bdev controller NVMe0
00:25:23.321  1376.685864: reconnect bdev controller NVMe0
00:25:23.321  3391.945926: reconnect delay bdev controller NVMe0
00:25:23.321  3391.960783: reconnect bdev controller NVMe0
00:25:23.321  5392.217428: reconnect delay bdev controller NVMe0
00:25:23.321  5392.229916: reconnect bdev controller NVMe0
00:25:23.321  7392.480268: reconnect delay bdev controller NVMe0
00:25:23.321  7392.492938: reconnect bdev controller NVMe0
00:25:23.321    06:35:40	-- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:25:23.321   06:35:40	-- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:25:23.321   06:35:40	-- host/timeout.sh@136 -- # kill 90204
00:25:23.321   06:35:40	-- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:23.321   06:35:40	-- host/timeout.sh@139 -- # killprocess 90176
00:25:23.321   06:35:40	-- common/autotest_common.sh@936 -- # '[' -z 90176 ']'
00:25:23.321   06:35:40	-- common/autotest_common.sh@940 -- # kill -0 90176
00:25:23.321    06:35:40	-- common/autotest_common.sh@941 -- # uname
00:25:23.321   06:35:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:23.321    06:35:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90176
00:25:23.321  killing process with pid 90176
00:25:23.321  Received shutdown signal, test time was about 8.192724 seconds
00:25:23.321  
00:25:23.321                                                                                                  Latency(us)
00:25:23.321  
[2024-12-16T06:35:40.297Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:23.321  
[2024-12-16T06:35:40.297Z]  ===================================================================================================================
00:25:23.321  
[2024-12-16T06:35:40.297Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:23.321   06:35:40	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:23.321   06:35:40	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:23.321   06:35:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 90176'
00:25:23.321   06:35:40	-- common/autotest_common.sh@955 -- # kill 90176
00:25:23.321   06:35:40	-- common/autotest_common.sh@960 -- # wait 90176
00:25:23.579   06:35:40	-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:23.838   06:35:40	-- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:25:23.838   06:35:40	-- host/timeout.sh@145 -- # nvmftestfini
00:25:23.838   06:35:40	-- nvmf/common.sh@476 -- # nvmfcleanup
00:25:23.838   06:35:40	-- nvmf/common.sh@116 -- # sync
00:25:23.838   06:35:40	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:25:23.838   06:35:40	-- nvmf/common.sh@119 -- # set +e
00:25:23.838   06:35:40	-- nvmf/common.sh@120 -- # for i in {1..20}
00:25:23.838   06:35:40	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:25:23.838  rmmod nvme_tcp
00:25:23.838  rmmod nvme_fabrics
00:25:23.838  rmmod nvme_keyring
00:25:23.838   06:35:40	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:25:23.838   06:35:40	-- nvmf/common.sh@123 -- # set -e
00:25:23.838   06:35:40	-- nvmf/common.sh@124 -- # return 0
00:25:23.838   06:35:40	-- nvmf/common.sh@477 -- # '[' -n 89592 ']'
00:25:23.838   06:35:40	-- nvmf/common.sh@478 -- # killprocess 89592
00:25:23.838   06:35:40	-- common/autotest_common.sh@936 -- # '[' -z 89592 ']'
00:25:23.838   06:35:40	-- common/autotest_common.sh@940 -- # kill -0 89592
00:25:23.838    06:35:40	-- common/autotest_common.sh@941 -- # uname
00:25:23.838   06:35:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:23.838    06:35:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89592
00:25:24.097   06:35:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:24.098   06:35:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:24.098  killing process with pid 89592
00:25:24.098   06:35:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 89592'
00:25:24.098   06:35:40	-- common/autotest_common.sh@955 -- # kill 89592
00:25:24.098   06:35:40	-- common/autotest_common.sh@960 -- # wait 89592
00:25:24.356   06:35:41	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:25:24.356   06:35:41	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:25:24.356   06:35:41	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:25:24.356   06:35:41	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:24.356   06:35:41	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:25:24.356   06:35:41	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:24.356   06:35:41	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:24.356    06:35:41	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:24.356   06:35:41	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:25:24.356  
00:25:24.356  real	0m47.148s
00:25:24.356  user	2m18.001s
00:25:24.356  sys	0m5.273s
00:25:24.356   06:35:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:24.356   06:35:41	-- common/autotest_common.sh@10 -- # set +x
00:25:24.356  ************************************
00:25:24.356  END TEST nvmf_timeout
00:25:24.356  ************************************
00:25:24.356   06:35:41	-- nvmf/nvmf.sh@120 -- # [[ virt == phy ]]
00:25:24.356   06:35:41	-- nvmf/nvmf.sh@127 -- # timing_exit host
00:25:24.356   06:35:41	-- common/autotest_common.sh@728 -- # xtrace_disable
00:25:24.356   06:35:41	-- common/autotest_common.sh@10 -- # set +x
00:25:24.356   06:35:41	-- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT
00:25:24.356  ************************************
00:25:24.356  END TEST nvmf_tcp
00:25:24.356  ************************************
00:25:24.356  
00:25:24.356  real	18m37.352s
00:25:24.356  user	59m34.491s
00:25:24.356  sys	3m55.651s
00:25:24.356   06:35:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:24.356   06:35:41	-- common/autotest_common.sh@10 -- # set +x
00:25:24.356   06:35:41	-- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]]
00:25:24.356   06:35:41	-- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:25:24.356   06:35:41	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:25:24.356   06:35:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:24.356   06:35:41	-- common/autotest_common.sh@10 -- # set +x
00:25:24.356  ************************************
00:25:24.356  START TEST spdkcli_nvmf_tcp
00:25:24.356  ************************************
00:25:24.356   06:35:41	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:25:24.615  * Looking for test storage...
00:25:24.616  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:25:24.616    06:35:41	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:24.616     06:35:41	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:24.616     06:35:41	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:24.616    06:35:41	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:24.616    06:35:41	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:24.616    06:35:41	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:24.616    06:35:41	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:24.616    06:35:41	-- scripts/common.sh@335 -- # IFS=.-:
00:25:24.616    06:35:41	-- scripts/common.sh@335 -- # read -ra ver1
00:25:24.616    06:35:41	-- scripts/common.sh@336 -- # IFS=.-:
00:25:24.616    06:35:41	-- scripts/common.sh@336 -- # read -ra ver2
00:25:24.616    06:35:41	-- scripts/common.sh@337 -- # local 'op=<'
00:25:24.616    06:35:41	-- scripts/common.sh@339 -- # ver1_l=2
00:25:24.616    06:35:41	-- scripts/common.sh@340 -- # ver2_l=1
00:25:24.616    06:35:41	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:24.616    06:35:41	-- scripts/common.sh@343 -- # case "$op" in
00:25:24.616    06:35:41	-- scripts/common.sh@344 -- # : 1
00:25:24.616    06:35:41	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:24.616    06:35:41	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:24.616     06:35:41	-- scripts/common.sh@364 -- # decimal 1
00:25:24.616     06:35:41	-- scripts/common.sh@352 -- # local d=1
00:25:24.616     06:35:41	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:24.616     06:35:41	-- scripts/common.sh@354 -- # echo 1
00:25:24.616    06:35:41	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:24.616     06:35:41	-- scripts/common.sh@365 -- # decimal 2
00:25:24.616     06:35:41	-- scripts/common.sh@352 -- # local d=2
00:25:24.616     06:35:41	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:24.616     06:35:41	-- scripts/common.sh@354 -- # echo 2
00:25:24.616    06:35:41	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:24.616    06:35:41	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:24.616    06:35:41	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:24.616    06:35:41	-- scripts/common.sh@367 -- # return 0
00:25:24.616    06:35:41	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:24.616    06:35:41	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:24.616  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:24.616  		--rc genhtml_branch_coverage=1
00:25:24.616  		--rc genhtml_function_coverage=1
00:25:24.616  		--rc genhtml_legend=1
00:25:24.616  		--rc geninfo_all_blocks=1
00:25:24.616  		--rc geninfo_unexecuted_blocks=1
00:25:24.616  		
00:25:24.616  		'
00:25:24.616    06:35:41	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:24.616  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:24.616  		--rc genhtml_branch_coverage=1
00:25:24.616  		--rc genhtml_function_coverage=1
00:25:24.616  		--rc genhtml_legend=1
00:25:24.616  		--rc geninfo_all_blocks=1
00:25:24.616  		--rc geninfo_unexecuted_blocks=1
00:25:24.616  		
00:25:24.616  		'
00:25:24.616    06:35:41	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:24.616  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:24.616  		--rc genhtml_branch_coverage=1
00:25:24.616  		--rc genhtml_function_coverage=1
00:25:24.616  		--rc genhtml_legend=1
00:25:24.616  		--rc geninfo_all_blocks=1
00:25:24.616  		--rc geninfo_unexecuted_blocks=1
00:25:24.616  		
00:25:24.616  		'
00:25:24.616    06:35:41	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:24.616  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:24.616  		--rc genhtml_branch_coverage=1
00:25:24.616  		--rc genhtml_function_coverage=1
00:25:24.616  		--rc genhtml_legend=1
00:25:24.616  		--rc geninfo_all_blocks=1
00:25:24.616  		--rc geninfo_unexecuted_blocks=1
00:25:24.616  		
00:25:24.616  		'
00:25:24.616   06:35:41	-- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:25:24.616    06:35:41	-- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:25:24.616    06:35:41	-- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:25:24.616   06:35:41	-- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:24.616     06:35:41	-- nvmf/common.sh@7 -- # uname -s
00:25:24.616    06:35:41	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:24.616    06:35:41	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:24.616    06:35:41	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:24.616    06:35:41	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:24.616    06:35:41	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:24.616    06:35:41	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:24.616    06:35:41	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:24.616    06:35:41	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:24.616    06:35:41	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:24.616     06:35:41	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:24.616    06:35:41	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:25:24.616    06:35:41	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:25:24.616    06:35:41	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:24.616    06:35:41	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:24.616    06:35:41	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:24.616    06:35:41	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:24.616     06:35:41	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:24.616     06:35:41	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:24.616     06:35:41	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:24.616      06:35:41	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:24.616      06:35:41	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:24.616      06:35:41	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:24.616      06:35:41	-- paths/export.sh@5 -- # export PATH
00:25:24.616      06:35:41	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:24.616    06:35:41	-- nvmf/common.sh@46 -- # : 0
00:25:24.616    06:35:41	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:25:24.616    06:35:41	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:25:24.616    06:35:41	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:25:24.616    06:35:41	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:24.616    06:35:41	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:24.616    06:35:41	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:25:24.616    06:35:41	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:25:24.616    06:35:41	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:25:24.616   06:35:41	-- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:25:24.616   06:35:41	-- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:25:24.616   06:35:41	-- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:25:24.616   06:35:41	-- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:25:24.616   06:35:41	-- common/autotest_common.sh@722 -- # xtrace_disable
00:25:24.616   06:35:41	-- common/autotest_common.sh@10 -- # set +x
00:25:24.616   06:35:41	-- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:25:24.616   06:35:41	-- spdkcli/common.sh@33 -- # nvmf_tgt_pid=90492
00:25:24.616   06:35:41	-- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:25:24.616   06:35:41	-- spdkcli/common.sh@34 -- # waitforlisten 90492
00:25:24.616   06:35:41	-- common/autotest_common.sh@829 -- # '[' -z 90492 ']'
00:25:24.616   06:35:41	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:24.616   06:35:41	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:24.616  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:24.616   06:35:41	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:24.616   06:35:41	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:24.616   06:35:41	-- common/autotest_common.sh@10 -- # set +x
00:25:24.616  [2024-12-16 06:35:41.542110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:24.616  [2024-12-16 06:35:41.542204] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90492 ]
00:25:24.876  [2024-12-16 06:35:41.680436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:24.876  [2024-12-16 06:35:41.774296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:25:24.876  [2024-12-16 06:35:41.774639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:24.876  [2024-12-16 06:35:41.774652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.812   06:35:42	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:25.812   06:35:42	-- common/autotest_common.sh@862 -- # return 0
00:25:25.812   06:35:42	-- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:25:25.812   06:35:42	-- common/autotest_common.sh@728 -- # xtrace_disable
00:25:25.812   06:35:42	-- common/autotest_common.sh@10 -- # set +x
00:25:25.812   06:35:42	-- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:25:25.812   06:35:42	-- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:25:25.812   06:35:42	-- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:25:25.812   06:35:42	-- common/autotest_common.sh@722 -- # xtrace_disable
00:25:25.812   06:35:42	-- common/autotest_common.sh@10 -- # set +x
00:25:25.812   06:35:42	-- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:25:25.812  '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:25:25.812  '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:25:25.812  '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:25:25.812  '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:25:25.812  '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:25:25.812  '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:25:25.812  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:25:25.812  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:25:25.812  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:25:25.812  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:25:25.812  '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:25:25.812  '
00:25:26.070  [2024-12-16 06:35:43.015707] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:25:28.604  [2024-12-16 06:35:45.278299] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:29.980  [2024-12-16 06:35:46.572247] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:25:32.513  [2024-12-16 06:35:48.967657] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:25:34.416  [2024-12-16 06:35:51.030461] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:25:35.793  Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:25:35.793  Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:25:35.793  Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:25:35.793  Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:25:35.793  Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:25:35.793  Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:25:35.793  Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:25:35.793  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:35.793  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:35.793  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:25:35.793  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:25:35.793  Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:25:35.793   06:35:52	-- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:25:35.793   06:35:52	-- common/autotest_common.sh@728 -- # xtrace_disable
00:25:35.793   06:35:52	-- common/autotest_common.sh@10 -- # set +x
00:25:36.051   06:35:52	-- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:25:36.051   06:35:52	-- common/autotest_common.sh@722 -- # xtrace_disable
00:25:36.051   06:35:52	-- common/autotest_common.sh@10 -- # set +x
00:25:36.051   06:35:52	-- spdkcli/nvmf.sh@69 -- # check_match
00:25:36.051   06:35:52	-- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf
00:25:36.310   06:35:53	-- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:25:36.310   06:35:53	-- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:25:36.569   06:35:53	-- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:25:36.569   06:35:53	-- common/autotest_common.sh@728 -- # xtrace_disable
00:25:36.569   06:35:53	-- common/autotest_common.sh@10 -- # set +x
00:25:36.569   06:35:53	-- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:25:36.569   06:35:53	-- common/autotest_common.sh@722 -- # xtrace_disable
00:25:36.569   06:35:53	-- common/autotest_common.sh@10 -- # set +x
00:25:36.569   06:35:53	-- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:25:36.569  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:25:36.569  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:25:36.569  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:25:36.569  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:25:36.569  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:25:36.569  '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:25:36.569  '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:25:36.569  '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:25:36.569  '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:25:36.569  '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:25:36.569  '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:25:36.569  '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:25:36.569  '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:25:36.569  '
00:25:41.840  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:25:41.840  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:25:41.840  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:25:41.840  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:25:41.840  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:25:41.840  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:25:41.840  Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:25:41.840  Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:25:41.840  Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:25:41.840  Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:25:41.840  Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:25:41.840  Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:25:41.840  Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:25:41.840  Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:25:41.840   06:35:58	-- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:25:41.840   06:35:58	-- common/autotest_common.sh@728 -- # xtrace_disable
00:25:41.840   06:35:58	-- common/autotest_common.sh@10 -- # set +x
00:25:41.840   06:35:58	-- spdkcli/nvmf.sh@90 -- # killprocess 90492
00:25:41.840   06:35:58	-- common/autotest_common.sh@936 -- # '[' -z 90492 ']'
00:25:41.840   06:35:58	-- common/autotest_common.sh@940 -- # kill -0 90492
00:25:41.840    06:35:58	-- common/autotest_common.sh@941 -- # uname
00:25:41.840   06:35:58	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:41.840    06:35:58	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90492
00:25:42.099   06:35:58	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:42.099   06:35:58	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:42.099  killing process with pid 90492
00:25:42.099   06:35:58	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 90492'
00:25:42.099   06:35:58	-- common/autotest_common.sh@955 -- # kill 90492
00:25:42.099  [2024-12-16 06:35:58.829405] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:25:42.099   06:35:58	-- common/autotest_common.sh@960 -- # wait 90492
00:25:42.099   06:35:59	-- spdkcli/nvmf.sh@1 -- # cleanup
00:25:42.099   06:35:59	-- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:25:42.099   06:35:59	-- spdkcli/common.sh@13 -- # '[' -n 90492 ']'
00:25:42.100   06:35:59	-- spdkcli/common.sh@14 -- # killprocess 90492
00:25:42.100   06:35:59	-- common/autotest_common.sh@936 -- # '[' -z 90492 ']'
00:25:42.100   06:35:59	-- common/autotest_common.sh@940 -- # kill -0 90492
00:25:42.100  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (90492) - No such process
00:25:42.100  Process with pid 90492 is not found
00:25:42.100   06:35:59	-- common/autotest_common.sh@963 -- # echo 'Process with pid 90492 is not found'
00:25:42.100   06:35:59	-- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:25:42.100   06:35:59	-- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:25:42.100   06:35:59	-- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:25:42.100  
00:25:42.100  real	0m17.796s
00:25:42.100  user	0m38.473s
00:25:42.100  sys	0m0.909s
00:25:42.100   06:35:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:42.100   06:35:59	-- common/autotest_common.sh@10 -- # set +x
00:25:42.100  ************************************
00:25:42.100  END TEST spdkcli_nvmf_tcp
00:25:42.100  ************************************
00:25:42.359   06:35:59	-- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:25:42.359   06:35:59	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:25:42.359   06:35:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:42.359   06:35:59	-- common/autotest_common.sh@10 -- # set +x
00:25:42.359  ************************************
00:25:42.359  START TEST nvmf_identify_passthru
00:25:42.359  ************************************
00:25:42.359   06:35:59	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:25:42.359  * Looking for test storage...
00:25:42.359  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:25:42.359    06:35:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:42.359     06:35:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:42.359     06:35:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:42.359    06:35:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:42.359    06:35:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:42.359    06:35:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:42.359    06:35:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:42.359    06:35:59	-- scripts/common.sh@335 -- # IFS=.-:
00:25:42.359    06:35:59	-- scripts/common.sh@335 -- # read -ra ver1
00:25:42.359    06:35:59	-- scripts/common.sh@336 -- # IFS=.-:
00:25:42.359    06:35:59	-- scripts/common.sh@336 -- # read -ra ver2
00:25:42.359    06:35:59	-- scripts/common.sh@337 -- # local 'op=<'
00:25:42.359    06:35:59	-- scripts/common.sh@339 -- # ver1_l=2
00:25:42.359    06:35:59	-- scripts/common.sh@340 -- # ver2_l=1
00:25:42.359    06:35:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:42.359    06:35:59	-- scripts/common.sh@343 -- # case "$op" in
00:25:42.359    06:35:59	-- scripts/common.sh@344 -- # : 1
00:25:42.359    06:35:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:42.359    06:35:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:42.359     06:35:59	-- scripts/common.sh@364 -- # decimal 1
00:25:42.359     06:35:59	-- scripts/common.sh@352 -- # local d=1
00:25:42.359     06:35:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:42.359     06:35:59	-- scripts/common.sh@354 -- # echo 1
00:25:42.359    06:35:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:42.359     06:35:59	-- scripts/common.sh@365 -- # decimal 2
00:25:42.359     06:35:59	-- scripts/common.sh@352 -- # local d=2
00:25:42.359     06:35:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:42.359     06:35:59	-- scripts/common.sh@354 -- # echo 2
00:25:42.359    06:35:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:42.359    06:35:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:42.359    06:35:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:42.359    06:35:59	-- scripts/common.sh@367 -- # return 0
00:25:42.359    06:35:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:42.359    06:35:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:42.359  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:42.359  		--rc genhtml_branch_coverage=1
00:25:42.359  		--rc genhtml_function_coverage=1
00:25:42.359  		--rc genhtml_legend=1
00:25:42.359  		--rc geninfo_all_blocks=1
00:25:42.359  		--rc geninfo_unexecuted_blocks=1
00:25:42.359  		
00:25:42.359  		'
00:25:42.359    06:35:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:42.359  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:42.359  		--rc genhtml_branch_coverage=1
00:25:42.360  		--rc genhtml_function_coverage=1
00:25:42.360  		--rc genhtml_legend=1
00:25:42.360  		--rc geninfo_all_blocks=1
00:25:42.360  		--rc geninfo_unexecuted_blocks=1
00:25:42.360  		
00:25:42.360  		'
00:25:42.360    06:35:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:42.360  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:42.360  		--rc genhtml_branch_coverage=1
00:25:42.360  		--rc genhtml_function_coverage=1
00:25:42.360  		--rc genhtml_legend=1
00:25:42.360  		--rc geninfo_all_blocks=1
00:25:42.360  		--rc geninfo_unexecuted_blocks=1
00:25:42.360  		
00:25:42.360  		'
00:25:42.360    06:35:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:42.360  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:42.360  		--rc genhtml_branch_coverage=1
00:25:42.360  		--rc genhtml_function_coverage=1
00:25:42.360  		--rc genhtml_legend=1
00:25:42.360  		--rc geninfo_all_blocks=1
00:25:42.360  		--rc geninfo_unexecuted_blocks=1
00:25:42.360  		
00:25:42.360  		'
00:25:42.360   06:35:59	-- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:42.360     06:35:59	-- nvmf/common.sh@7 -- # uname -s
00:25:42.360    06:35:59	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:42.360    06:35:59	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:42.360    06:35:59	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:42.360    06:35:59	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:42.360    06:35:59	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:42.360    06:35:59	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:42.360    06:35:59	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:42.360    06:35:59	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:42.360    06:35:59	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:42.360     06:35:59	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:42.360    06:35:59	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:25:42.360    06:35:59	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:25:42.360    06:35:59	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:42.360    06:35:59	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:42.360    06:35:59	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:42.360    06:35:59	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:42.360     06:35:59	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:42.360     06:35:59	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:42.360     06:35:59	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:42.360      06:35:59	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:42.360      06:35:59	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:42.360      06:35:59	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:42.360      06:35:59	-- paths/export.sh@5 -- # export PATH
00:25:42.360      06:35:59	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:42.360    06:35:59	-- nvmf/common.sh@46 -- # : 0
00:25:42.360    06:35:59	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:25:42.360    06:35:59	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:25:42.360    06:35:59	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:25:42.360    06:35:59	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:42.360    06:35:59	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:42.360    06:35:59	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:25:42.360    06:35:59	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:25:42.360    06:35:59	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:25:42.360   06:35:59	-- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:42.360    06:35:59	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:42.360    06:35:59	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:42.360    06:35:59	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:42.360     06:35:59	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:42.360     06:35:59	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:42.360     06:35:59	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:42.360     06:35:59	-- paths/export.sh@5 -- # export PATH
00:25:42.360     06:35:59	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:42.360   06:35:59	-- target/identify_passthru.sh@12 -- # nvmftestinit
00:25:42.360   06:35:59	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:25:42.360   06:35:59	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:42.360   06:35:59	-- nvmf/common.sh@436 -- # prepare_net_devs
00:25:42.360   06:35:59	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:25:42.360   06:35:59	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:25:42.360   06:35:59	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:42.360   06:35:59	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:25:42.360    06:35:59	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:42.360   06:35:59	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:25:42.360   06:35:59	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:25:42.360   06:35:59	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:25:42.360   06:35:59	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:25:42.360   06:35:59	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:25:42.360   06:35:59	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:25:42.360   06:35:59	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:42.360   06:35:59	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:42.360   06:35:59	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:25:42.360   06:35:59	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:25:42.360   06:35:59	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:25:42.360   06:35:59	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:25:42.360   06:35:59	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:25:42.360   06:35:59	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:42.360   06:35:59	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:25:42.360   06:35:59	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:25:42.360   06:35:59	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:25:42.360   06:35:59	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:25:42.360   06:35:59	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:25:42.360   06:35:59	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:25:42.619  Cannot find device "nvmf_tgt_br"
00:25:42.619   06:35:59	-- nvmf/common.sh@154 -- # true
00:25:42.619   06:35:59	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:25:42.619  Cannot find device "nvmf_tgt_br2"
00:25:42.619   06:35:59	-- nvmf/common.sh@155 -- # true
00:25:42.619   06:35:59	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:25:42.619   06:35:59	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:25:42.619  Cannot find device "nvmf_tgt_br"
00:25:42.619   06:35:59	-- nvmf/common.sh@157 -- # true
00:25:42.619   06:35:59	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:25:42.619  Cannot find device "nvmf_tgt_br2"
00:25:42.619   06:35:59	-- nvmf/common.sh@158 -- # true
00:25:42.619   06:35:59	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:25:42.619   06:35:59	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:25:42.619   06:35:59	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:42.619  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:42.619   06:35:59	-- nvmf/common.sh@161 -- # true
00:25:42.619   06:35:59	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:42.619  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:42.619   06:35:59	-- nvmf/common.sh@162 -- # true
00:25:42.619   06:35:59	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:25:42.619   06:35:59	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:25:42.619   06:35:59	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:25:42.619   06:35:59	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:25:42.619   06:35:59	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:25:42.619   06:35:59	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:25:42.619   06:35:59	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:25:42.619   06:35:59	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:25:42.619   06:35:59	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:25:42.619   06:35:59	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:25:42.619   06:35:59	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:25:42.619   06:35:59	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:25:42.619   06:35:59	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:25:42.619   06:35:59	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:25:42.619   06:35:59	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:25:42.619   06:35:59	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:25:42.619   06:35:59	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:25:42.619   06:35:59	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:25:42.619   06:35:59	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:25:42.619   06:35:59	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:25:42.878   06:35:59	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:25:42.878   06:35:59	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:25:42.878   06:35:59	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:42.878   06:35:59	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:25:42.878  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:42.878  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:25:42.878  
00:25:42.878  --- 10.0.0.2 ping statistics ---
00:25:42.878  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:42.878  rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:25:42.878   06:35:59	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:25:42.878  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:42.878  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms
00:25:42.878  
00:25:42.878  --- 10.0.0.3 ping statistics ---
00:25:42.878  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:42.878  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:25:42.878   06:35:59	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:42.878  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:42.878  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:25:42.878  
00:25:42.878  --- 10.0.0.1 ping statistics ---
00:25:42.878  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:42.878  rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:25:42.878   06:35:59	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:42.878   06:35:59	-- nvmf/common.sh@421 -- # return 0
00:25:42.878   06:35:59	-- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:25:42.878   06:35:59	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:42.878   06:35:59	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:25:42.878   06:35:59	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:25:42.878   06:35:59	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:42.879   06:35:59	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:25:42.879   06:35:59	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:25:42.879   06:35:59	-- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:25:42.879   06:35:59	-- common/autotest_common.sh@722 -- # xtrace_disable
00:25:42.879   06:35:59	-- common/autotest_common.sh@10 -- # set +x
00:25:42.879    06:35:59	-- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:25:42.879    06:35:59	-- common/autotest_common.sh@1519 -- # bdfs=()
00:25:42.879    06:35:59	-- common/autotest_common.sh@1519 -- # local bdfs
00:25:42.879    06:35:59	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:25:42.879     06:35:59	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:25:42.879     06:35:59	-- common/autotest_common.sh@1508 -- # bdfs=()
00:25:42.879     06:35:59	-- common/autotest_common.sh@1508 -- # local bdfs
00:25:42.879     06:35:59	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:25:42.879      06:35:59	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:25:42.879      06:35:59	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:25:42.879     06:35:59	-- common/autotest_common.sh@1510 -- # (( 2 == 0 ))
00:25:42.879     06:35:59	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0
00:25:42.879    06:35:59	-- common/autotest_common.sh@1522 -- # echo 0000:00:06.0
00:25:42.879   06:35:59	-- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0
00:25:42.879   06:35:59	-- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']'
00:25:42.879    06:35:59	-- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0
00:25:42.879    06:35:59	-- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:25:42.879    06:35:59	-- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:25:43.137   06:35:59	-- target/identify_passthru.sh@23 -- # nvme_serial_number=12340
00:25:43.137    06:35:59	-- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0
00:25:43.137    06:35:59	-- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:25:43.137    06:35:59	-- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:25:43.137   06:36:00	-- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU
00:25:43.137   06:36:00	-- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:25:43.137   06:36:00	-- common/autotest_common.sh@728 -- # xtrace_disable
00:25:43.137   06:36:00	-- common/autotest_common.sh@10 -- # set +x
00:25:43.395   06:36:00	-- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:25:43.395   06:36:00	-- common/autotest_common.sh@722 -- # xtrace_disable
00:25:43.395   06:36:00	-- common/autotest_common.sh@10 -- # set +x
00:25:43.395   06:36:00	-- target/identify_passthru.sh@31 -- # nvmfpid=90996
00:25:43.396   06:36:00	-- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:25:43.396   06:36:00	-- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:43.396   06:36:00	-- target/identify_passthru.sh@35 -- # waitforlisten 90996
00:25:43.396   06:36:00	-- common/autotest_common.sh@829 -- # '[' -z 90996 ']'
00:25:43.396   06:36:00	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:43.396   06:36:00	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:43.396  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:43.396   06:36:00	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:43.396   06:36:00	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:43.396   06:36:00	-- common/autotest_common.sh@10 -- # set +x
00:25:43.396  [2024-12-16 06:36:00.196921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:43.396  [2024-12-16 06:36:00.197026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:43.396  [2024-12-16 06:36:00.341345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:43.654  [2024-12-16 06:36:00.458416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:25:43.654  [2024-12-16 06:36:00.458618] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:43.654  [2024-12-16 06:36:00.458638] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:43.654  [2024-12-16 06:36:00.458650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:43.654  [2024-12-16 06:36:00.458800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:43.654  [2024-12-16 06:36:00.458962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:43.654  [2024-12-16 06:36:00.459542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:25:43.654  [2024-12-16 06:36:00.459553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:44.225   06:36:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:44.225   06:36:01	-- common/autotest_common.sh@862 -- # return 0
00:25:44.225   06:36:01	-- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:25:44.225   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.225   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.225   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.225   06:36:01	-- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:25:44.225   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.225   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.484  [2024-12-16 06:36:01.269956] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:25:44.484   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.484   06:36:01	-- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:44.484   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.484   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.484  [2024-12-16 06:36:01.283957] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:44.484   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.484   06:36:01	-- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:25:44.484   06:36:01	-- common/autotest_common.sh@728 -- # xtrace_disable
00:25:44.484   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.484   06:36:01	-- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
00:25:44.484   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.484   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.484  Nvme0n1
00:25:44.484   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.484   06:36:01	-- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:25:44.484   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.484   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.484   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.484   06:36:01	-- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:44.484   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.484   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.484   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.484   06:36:01	-- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:44.484   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.484   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.484  [2024-12-16 06:36:01.416352] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:44.484   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.484   06:36:01	-- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:25:44.484   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.484   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:44.484  [2024-12-16 06:36:01.424153] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:25:44.484  [
00:25:44.484  {
00:25:44.484  "allow_any_host": true,
00:25:44.484  "hosts": [],
00:25:44.484  "listen_addresses": [],
00:25:44.484  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:44.484  "subtype": "Discovery"
00:25:44.484  },
00:25:44.484  {
00:25:44.484  "allow_any_host": true,
00:25:44.484  "hosts": [],
00:25:44.484  "listen_addresses": [
00:25:44.484  {
00:25:44.484  "adrfam": "IPv4",
00:25:44.484  "traddr": "10.0.0.2",
00:25:44.484  "transport": "TCP",
00:25:44.484  "trsvcid": "4420",
00:25:44.484  "trtype": "TCP"
00:25:44.484  }
00:25:44.484  ],
00:25:44.484  "max_cntlid": 65519,
00:25:44.484  "max_namespaces": 1,
00:25:44.484  "min_cntlid": 1,
00:25:44.484  "model_number": "SPDK bdev Controller",
00:25:44.484  "namespaces": [
00:25:44.484  {
00:25:44.484  "bdev_name": "Nvme0n1",
00:25:44.484  "name": "Nvme0n1",
00:25:44.484  "nguid": "A7B626D5C63049F38175CD1165DEF65C",
00:25:44.484  "nsid": 1,
00:25:44.484  "uuid": "a7b626d5-c630-49f3-8175-cd1165def65c"
00:25:44.484  }
00:25:44.484  ],
00:25:44.484  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:44.484  "serial_number": "SPDK00000000000001",
00:25:44.484  "subtype": "NVMe"
00:25:44.484  }
00:25:44.484  ]
00:25:44.484   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.484    06:36:01	-- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:44.484    06:36:01	-- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:25:44.484    06:36:01	-- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:25:44.743   06:36:01	-- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340
00:25:44.743    06:36:01	-- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:44.743    06:36:01	-- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:25:44.743    06:36:01	-- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:25:45.001   06:36:01	-- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU
00:25:45.001   06:36:01	-- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']'
00:25:45.001   06:36:01	-- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']'
00:25:45.001   06:36:01	-- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:45.001   06:36:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:45.001   06:36:01	-- common/autotest_common.sh@10 -- # set +x
00:25:45.001   06:36:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:45.001   06:36:01	-- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:25:45.001   06:36:01	-- target/identify_passthru.sh@77 -- # nvmftestfini
00:25:45.001   06:36:01	-- nvmf/common.sh@476 -- # nvmfcleanup
00:25:45.001   06:36:01	-- nvmf/common.sh@116 -- # sync
00:25:45.001   06:36:01	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:25:45.001   06:36:01	-- nvmf/common.sh@119 -- # set +e
00:25:45.001   06:36:01	-- nvmf/common.sh@120 -- # for i in {1..20}
00:25:45.001   06:36:01	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:25:45.001  rmmod nvme_tcp
00:25:45.001  rmmod nvme_fabrics
00:25:45.261  rmmod nvme_keyring
00:25:45.261   06:36:02	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:25:45.261   06:36:02	-- nvmf/common.sh@123 -- # set -e
00:25:45.261   06:36:02	-- nvmf/common.sh@124 -- # return 0
00:25:45.261   06:36:02	-- nvmf/common.sh@477 -- # '[' -n 90996 ']'
00:25:45.261   06:36:02	-- nvmf/common.sh@478 -- # killprocess 90996
00:25:45.261   06:36:02	-- common/autotest_common.sh@936 -- # '[' -z 90996 ']'
00:25:45.261   06:36:02	-- common/autotest_common.sh@940 -- # kill -0 90996
00:25:45.261    06:36:02	-- common/autotest_common.sh@941 -- # uname
00:25:45.261   06:36:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:45.261    06:36:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90996
00:25:45.261   06:36:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:45.261   06:36:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:45.261  killing process with pid 90996
00:25:45.261   06:36:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 90996'
00:25:45.261   06:36:02	-- common/autotest_common.sh@955 -- # kill 90996
00:25:45.261  [2024-12-16 06:36:02.051149] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:25:45.261   06:36:02	-- common/autotest_common.sh@960 -- # wait 90996
00:25:45.520   06:36:02	-- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:25:45.520   06:36:02	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:25:45.520   06:36:02	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:25:45.520   06:36:02	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:45.520   06:36:02	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:25:45.520   06:36:02	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:45.520   06:36:02	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:25:45.520    06:36:02	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:45.520   06:36:02	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:25:45.520  
00:25:45.520  real	0m3.224s
00:25:45.520  user	0m7.754s
00:25:45.520  sys	0m0.856s
00:25:45.520   06:36:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:45.520  ************************************
00:25:45.520  END TEST nvmf_identify_passthru
00:25:45.520   06:36:02	-- common/autotest_common.sh@10 -- # set +x
00:25:45.520  ************************************
00:25:45.520   06:36:02	-- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh
00:25:45.520   06:36:02	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:45.520   06:36:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:45.520   06:36:02	-- common/autotest_common.sh@10 -- # set +x
00:25:45.520  ************************************
00:25:45.520  START TEST nvmf_dif
00:25:45.520  ************************************
00:25:45.520   06:36:02	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh
00:25:45.520  * Looking for test storage...
00:25:45.520  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:25:45.520    06:36:02	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:45.520     06:36:02	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:45.520     06:36:02	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:45.779    06:36:02	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:45.779    06:36:02	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:45.779    06:36:02	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:45.779    06:36:02	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:45.779    06:36:02	-- scripts/common.sh@335 -- # IFS=.-:
00:25:45.779    06:36:02	-- scripts/common.sh@335 -- # read -ra ver1
00:25:45.779    06:36:02	-- scripts/common.sh@336 -- # IFS=.-:
00:25:45.779    06:36:02	-- scripts/common.sh@336 -- # read -ra ver2
00:25:45.779    06:36:02	-- scripts/common.sh@337 -- # local 'op=<'
00:25:45.779    06:36:02	-- scripts/common.sh@339 -- # ver1_l=2
00:25:45.779    06:36:02	-- scripts/common.sh@340 -- # ver2_l=1
00:25:45.779    06:36:02	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:45.779    06:36:02	-- scripts/common.sh@343 -- # case "$op" in
00:25:45.779    06:36:02	-- scripts/common.sh@344 -- # : 1
00:25:45.779    06:36:02	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:45.779    06:36:02	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:45.779     06:36:02	-- scripts/common.sh@364 -- # decimal 1
00:25:45.779     06:36:02	-- scripts/common.sh@352 -- # local d=1
00:25:45.779     06:36:02	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:45.779     06:36:02	-- scripts/common.sh@354 -- # echo 1
00:25:45.779    06:36:02	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:45.779     06:36:02	-- scripts/common.sh@365 -- # decimal 2
00:25:45.779     06:36:02	-- scripts/common.sh@352 -- # local d=2
00:25:45.779     06:36:02	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:45.779     06:36:02	-- scripts/common.sh@354 -- # echo 2
00:25:45.779    06:36:02	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:45.779    06:36:02	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:45.779    06:36:02	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:45.779    06:36:02	-- scripts/common.sh@367 -- # return 0
00:25:45.779    06:36:02	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:45.779    06:36:02	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:45.779  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:45.779  		--rc genhtml_branch_coverage=1
00:25:45.779  		--rc genhtml_function_coverage=1
00:25:45.779  		--rc genhtml_legend=1
00:25:45.779  		--rc geninfo_all_blocks=1
00:25:45.779  		--rc geninfo_unexecuted_blocks=1
00:25:45.779  		
00:25:45.779  		'
00:25:45.779    06:36:02	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:45.779  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:45.779  		--rc genhtml_branch_coverage=1
00:25:45.779  		--rc genhtml_function_coverage=1
00:25:45.779  		--rc genhtml_legend=1
00:25:45.779  		--rc geninfo_all_blocks=1
00:25:45.779  		--rc geninfo_unexecuted_blocks=1
00:25:45.779  		
00:25:45.779  		'
00:25:45.779    06:36:02	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:45.779  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:45.779  		--rc genhtml_branch_coverage=1
00:25:45.779  		--rc genhtml_function_coverage=1
00:25:45.779  		--rc genhtml_legend=1
00:25:45.779  		--rc geninfo_all_blocks=1
00:25:45.779  		--rc geninfo_unexecuted_blocks=1
00:25:45.779  		
00:25:45.779  		'
00:25:45.779    06:36:02	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:45.779  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:45.779  		--rc genhtml_branch_coverage=1
00:25:45.779  		--rc genhtml_function_coverage=1
00:25:45.779  		--rc genhtml_legend=1
00:25:45.779  		--rc geninfo_all_blocks=1
00:25:45.779  		--rc geninfo_unexecuted_blocks=1
00:25:45.779  		
00:25:45.779  		'
00:25:45.780   06:36:02	-- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:45.780     06:36:02	-- nvmf/common.sh@7 -- # uname -s
00:25:45.780    06:36:02	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:45.780    06:36:02	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:45.780    06:36:02	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:45.780    06:36:02	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:45.780    06:36:02	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:45.780    06:36:02	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:45.780    06:36:02	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:45.780    06:36:02	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:45.780    06:36:02	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:45.780     06:36:02	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:45.780    06:36:02	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:25:45.780    06:36:02	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:25:45.780    06:36:02	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:45.780    06:36:02	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:45.780    06:36:02	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:45.780    06:36:02	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:45.780     06:36:02	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:45.780     06:36:02	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:45.780     06:36:02	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:45.780      06:36:02	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:45.780      06:36:02	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:45.780      06:36:02	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:45.780      06:36:02	-- paths/export.sh@5 -- # export PATH
00:25:45.780      06:36:02	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:45.780    06:36:02	-- nvmf/common.sh@46 -- # : 0
00:25:45.780    06:36:02	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:25:45.780    06:36:02	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:25:45.780    06:36:02	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:25:45.780    06:36:02	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:45.780    06:36:02	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:45.780    06:36:02	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:25:45.780    06:36:02	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:25:45.780    06:36:02	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:25:45.780   06:36:02	-- target/dif.sh@15 -- # NULL_META=16
00:25:45.780   06:36:02	-- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:25:45.780   06:36:02	-- target/dif.sh@15 -- # NULL_SIZE=64
00:25:45.780   06:36:02	-- target/dif.sh@15 -- # NULL_DIF=1
00:25:45.780   06:36:02	-- target/dif.sh@135 -- # nvmftestinit
00:25:45.780   06:36:02	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:25:45.780   06:36:02	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:45.780   06:36:02	-- nvmf/common.sh@436 -- # prepare_net_devs
00:25:45.780   06:36:02	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:25:45.780   06:36:02	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:25:45.780   06:36:02	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:45.780   06:36:02	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:25:45.780    06:36:02	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:45.780   06:36:02	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:25:45.780   06:36:02	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:25:45.780   06:36:02	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:25:45.780   06:36:02	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:25:45.780   06:36:02	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:25:45.780   06:36:02	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:25:45.780   06:36:02	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:45.780   06:36:02	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:45.780   06:36:02	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:25:45.780   06:36:02	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:25:45.780   06:36:02	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:25:45.780   06:36:02	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:25:45.780   06:36:02	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:25:45.780   06:36:02	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:45.780   06:36:02	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:25:45.780   06:36:02	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:25:45.780   06:36:02	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:25:45.780   06:36:02	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:25:45.780   06:36:02	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:25:45.780   06:36:02	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:25:45.780  Cannot find device "nvmf_tgt_br"
00:25:45.780   06:36:02	-- nvmf/common.sh@154 -- # true
00:25:45.780   06:36:02	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:25:45.780  Cannot find device "nvmf_tgt_br2"
00:25:45.780   06:36:02	-- nvmf/common.sh@155 -- # true
00:25:45.780   06:36:02	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:25:45.780   06:36:02	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:25:45.780  Cannot find device "nvmf_tgt_br"
00:25:45.780   06:36:02	-- nvmf/common.sh@157 -- # true
00:25:45.780   06:36:02	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:25:45.780  Cannot find device "nvmf_tgt_br2"
00:25:45.780   06:36:02	-- nvmf/common.sh@158 -- # true
00:25:45.780   06:36:02	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:25:45.780   06:36:02	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:25:45.780   06:36:02	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:46.039  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:46.039   06:36:02	-- nvmf/common.sh@161 -- # true
00:25:46.039   06:36:02	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:46.039  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:46.039   06:36:02	-- nvmf/common.sh@162 -- # true
00:25:46.039   06:36:02	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:25:46.039   06:36:02	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:25:46.039   06:36:02	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:25:46.039   06:36:02	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:25:46.039   06:36:02	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:25:46.039   06:36:02	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:25:46.039   06:36:02	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:25:46.039   06:36:02	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:25:46.039   06:36:02	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:25:46.039   06:36:02	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:25:46.039   06:36:02	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:25:46.039   06:36:02	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:25:46.039   06:36:02	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:25:46.039   06:36:02	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:25:46.039   06:36:02	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:25:46.039   06:36:02	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:25:46.039   06:36:02	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:25:46.039   06:36:02	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:25:46.039   06:36:02	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:25:46.039   06:36:02	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:25:46.040   06:36:02	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:25:46.040   06:36:02	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:25:46.040   06:36:02	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:46.040   06:36:02	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:25:46.040  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:46.040  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms
00:25:46.040  
00:25:46.040  --- 10.0.0.2 ping statistics ---
00:25:46.040  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:46.040  rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:25:46.040   06:36:02	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:25:46.040  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:46.040  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms
00:25:46.040  
00:25:46.040  --- 10.0.0.3 ping statistics ---
00:25:46.040  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:46.040  rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:25:46.040   06:36:02	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:46.040  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:46.040  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms
00:25:46.040  
00:25:46.040  --- 10.0.0.1 ping statistics ---
00:25:46.040  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:46.040  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:25:46.040   06:36:02	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:46.040   06:36:02	-- nvmf/common.sh@421 -- # return 0
00:25:46.040   06:36:02	-- nvmf/common.sh@438 -- # '[' iso == iso ']'
00:25:46.040   06:36:02	-- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:25:46.607  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:46.607  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:25:46.607  0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:25:46.607   06:36:03	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:46.607   06:36:03	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:25:46.607   06:36:03	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:25:46.607   06:36:03	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:46.607   06:36:03	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:25:46.607   06:36:03	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:25:46.607   06:36:03	-- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:25:46.607   06:36:03	-- target/dif.sh@137 -- # nvmfappstart
00:25:46.607   06:36:03	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:25:46.607   06:36:03	-- common/autotest_common.sh@722 -- # xtrace_disable
00:25:46.607   06:36:03	-- common/autotest_common.sh@10 -- # set +x
00:25:46.607   06:36:03	-- nvmf/common.sh@469 -- # nvmfpid=91350
00:25:46.607   06:36:03	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:25:46.607   06:36:03	-- nvmf/common.sh@470 -- # waitforlisten 91350
00:25:46.607   06:36:03	-- common/autotest_common.sh@829 -- # '[' -z 91350 ']'
00:25:46.607   06:36:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:46.607   06:36:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:46.607  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:46.607   06:36:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:46.607   06:36:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:46.607   06:36:03	-- common/autotest_common.sh@10 -- # set +x
00:25:46.607  [2024-12-16 06:36:03.455068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:46.607  [2024-12-16 06:36:03.455156] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:46.866  [2024-12-16 06:36:03.597914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:46.866  [2024-12-16 06:36:03.706341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:25:46.866  [2024-12-16 06:36:03.706545] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:46.866  [2024-12-16 06:36:03.706578] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:46.866  [2024-12-16 06:36:03.706597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:46.866  [2024-12-16 06:36:03.706647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:47.802   06:36:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:47.802   06:36:04	-- common/autotest_common.sh@862 -- # return 0
00:25:47.802   06:36:04	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:25:47.802   06:36:04	-- common/autotest_common.sh@728 -- # xtrace_disable
00:25:47.802   06:36:04	-- common/autotest_common.sh@10 -- # set +x
00:25:47.802   06:36:04	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:47.802   06:36:04	-- target/dif.sh@139 -- # create_transport
00:25:47.802   06:36:04	-- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:25:47.802   06:36:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.802   06:36:04	-- common/autotest_common.sh@10 -- # set +x
00:25:47.802  [2024-12-16 06:36:04.551589] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:47.802   06:36:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.802   06:36:04	-- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:25:47.802   06:36:04	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:47.802   06:36:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:47.802   06:36:04	-- common/autotest_common.sh@10 -- # set +x
00:25:47.802  ************************************
00:25:47.802  START TEST fio_dif_1_default
00:25:47.802  ************************************
00:25:47.802   06:36:04	-- common/autotest_common.sh@1114 -- # fio_dif_1
00:25:47.802   06:36:04	-- target/dif.sh@86 -- # create_subsystems 0
00:25:47.802   06:36:04	-- target/dif.sh@28 -- # local sub
00:25:47.802   06:36:04	-- target/dif.sh@30 -- # for sub in "$@"
00:25:47.802   06:36:04	-- target/dif.sh@31 -- # create_subsystem 0
00:25:47.802   06:36:04	-- target/dif.sh@18 -- # local sub_id=0
00:25:47.802   06:36:04	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:25:47.802   06:36:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.802   06:36:04	-- common/autotest_common.sh@10 -- # set +x
00:25:47.802  bdev_null0
00:25:47.802   06:36:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.802   06:36:04	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:25:47.802   06:36:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.802   06:36:04	-- common/autotest_common.sh@10 -- # set +x
00:25:47.802   06:36:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.802   06:36:04	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:25:47.802   06:36:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.802   06:36:04	-- common/autotest_common.sh@10 -- # set +x
00:25:47.802   06:36:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.802   06:36:04	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:47.802   06:36:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.803   06:36:04	-- common/autotest_common.sh@10 -- # set +x
00:25:47.803  [2024-12-16 06:36:04.599661] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
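For readability, the rpc_cmd calls traced above boil down to the following target-side setup. This is a hand-written sketch using scripts/rpc.py; the script path and the default RPC socket are assumptions, while the method names and arguments are copied from the trace.

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420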
00:25:47.803   06:36:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.803   06:36:04	-- target/dif.sh@87 -- # fio /dev/fd/62
00:25:47.803    06:36:04	-- target/dif.sh@87 -- # create_json_sub_conf 0
00:25:47.803    06:36:04	-- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:25:47.803    06:36:04	-- nvmf/common.sh@520 -- # config=()
00:25:47.803    06:36:04	-- nvmf/common.sh@520 -- # local subsystem config
00:25:47.803   06:36:04	-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:25:47.803    06:36:04	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:25:47.803    06:36:04	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:25:47.803  {
00:25:47.803    "params": {
00:25:47.803      "name": "Nvme$subsystem",
00:25:47.803      "trtype": "$TEST_TRANSPORT",
00:25:47.803      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:47.803      "adrfam": "ipv4",
00:25:47.803      "trsvcid": "$NVMF_PORT",
00:25:47.803      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:47.803      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:47.803      "hdgst": ${hdgst:-false},
00:25:47.803      "ddgst": ${ddgst:-false}
00:25:47.803    },
00:25:47.803    "method": "bdev_nvme_attach_controller"
00:25:47.803  }
00:25:47.803  EOF
00:25:47.803  )")
00:25:47.803    06:36:04	-- target/dif.sh@82 -- # gen_fio_conf
00:25:47.803   06:36:04	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:25:47.803    06:36:04	-- target/dif.sh@54 -- # local file
00:25:47.803    06:36:04	-- target/dif.sh@56 -- # cat
00:25:47.803   06:36:04	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:25:47.803   06:36:04	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:47.803   06:36:04	-- common/autotest_common.sh@1328 -- # local sanitizers
00:25:47.803   06:36:04	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:25:47.803     06:36:04	-- nvmf/common.sh@542 -- # cat
00:25:47.803   06:36:04	-- common/autotest_common.sh@1330 -- # shift
00:25:47.803   06:36:04	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:25:47.803   06:36:04	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:25:47.803    06:36:04	-- target/dif.sh@72 -- # (( file = 1 ))
00:25:47.803    06:36:04	-- target/dif.sh@72 -- # (( file <= files ))
00:25:47.803    06:36:04	-- common/autotest_common.sh@1334 -- # grep libasan
00:25:47.803    06:36:04	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:25:47.803    06:36:04	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:25:47.803    06:36:04	-- nvmf/common.sh@544 -- # jq .
00:25:47.803     06:36:04	-- nvmf/common.sh@545 -- # IFS=,
00:25:47.803     06:36:04	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:25:47.803    "params": {
00:25:47.803      "name": "Nvme0",
00:25:47.803      "trtype": "tcp",
00:25:47.803      "traddr": "10.0.0.2",
00:25:47.803      "adrfam": "ipv4",
00:25:47.803      "trsvcid": "4420",
00:25:47.803      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:47.803      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:25:47.803      "hdgst": false,
00:25:47.803      "ddgst": false
00:25:47.803    },
00:25:47.803    "method": "bdev_nvme_attach_controller"
00:25:47.803  }'
00:25:47.803   06:36:04	-- common/autotest_common.sh@1334 -- # asan_lib=
00:25:47.803   06:36:04	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:25:47.803   06:36:04	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:25:47.803    06:36:04	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:25:47.803    06:36:04	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:25:47.803    06:36:04	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:25:47.803   06:36:04	-- common/autotest_common.sh@1334 -- # asan_lib=
00:25:47.803   06:36:04	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:25:47.803   06:36:04	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:25:47.803   06:36:04	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
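The line above runs fio through the SPDK bdev plugin, handing it the generated JSON config and job file as /dev/fd descriptors. A self-contained sketch of the same invocation with ordinary files is shown here; bdev.json and job.fio are placeholder names, while the plugin and fio paths are the ones used in this log.

  # bdev.json holds a bdev_nvme_attach_controller config like the one printed above,
  # job.fio holds the job sections that gen_fio_conf streams in on /dev/fd/61.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio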
00:25:48.062  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:25:48.062  fio-3.35
00:25:48.062  Starting 1 thread
00:25:48.329  [2024-12-16 06:36:05.275561] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:25:48.329  [2024-12-16 06:36:05.275803] rpc.c:  90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:26:00.578  
00:26:00.579  filename0: (groupid=0, jobs=1): err= 0: pid=91440: Mon Dec 16 06:36:15 2024
00:26:00.579    read: IOPS=2581, BW=10.1MiB/s (10.6MB/s)(101MiB/10031msec)
00:26:00.579      slat (nsec): min=5859, max=53856, avg=7198.40, stdev=2337.92
00:26:00.579      clat (usec): min=359, max=42492, avg=1527.29, stdev=6665.30
00:26:00.579       lat (usec): min=365, max=42507, avg=1534.49, stdev=6665.33
00:26:00.579      clat percentiles (usec):
00:26:00.579       |  1.00th=[  367],  5.00th=[  371], 10.00th=[  375], 20.00th=[  379],
00:26:00.579       | 30.00th=[  388], 40.00th=[  392], 50.00th=[  396], 60.00th=[  400],
00:26:00.579       | 70.00th=[  408], 80.00th=[  424], 90.00th=[  453], 95.00th=[  490],
00:26:00.579       | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206],
00:26:00.579       | 99.99th=[42730]
00:26:00.579     bw (  KiB/s): min= 5376, max=16096, per=100.00%, avg=10355.20, stdev=2558.66, samples=20
00:26:00.579     iops        : min= 1344, max= 4024, avg=2588.80, stdev=639.67, samples=20
00:26:00.579    lat (usec)   : 500=95.54%, 750=1.66%, 1000=0.02%
00:26:00.579    lat (msec)   : 10=0.02%, 50=2.77%
00:26:00.579    cpu          : usr=89.12%, sys=9.19%, ctx=35, majf=0, minf=9
00:26:00.579    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:00.579       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:00.579       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:00.579       issued rwts: total=25892,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:00.579       latency   : target=0, window=0, percentile=100.00%, depth=4
00:26:00.579  
00:26:00.579  Run status group 0 (all jobs):
00:26:00.579     READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=101MiB (106MB), run=10031-10031msec
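As a quick sanity check on the summary line, the reported IOPS and the 4 KiB block size reproduce the bandwidth figure (a back-of-the-envelope check, not part of the test):

  echo $((2581 * 4096))   # 10571776 B/s, i.e. about 10.6 MB/s, matching the 10.1 MiB/s (10.6 MB/s) above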
00:26:00.579   06:36:15	-- target/dif.sh@88 -- # destroy_subsystems 0
00:26:00.579   06:36:15	-- target/dif.sh@43 -- # local sub
00:26:00.579   06:36:15	-- target/dif.sh@45 -- # for sub in "$@"
00:26:00.579   06:36:15	-- target/dif.sh@46 -- # destroy_subsystem 0
00:26:00.579   06:36:15	-- target/dif.sh@36 -- # local sub_id=0
00:26:00.579   06:36:15	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579  ************************************
00:26:00.579  END TEST fio_dif_1_default
00:26:00.579  ************************************
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579  
00:26:00.579  real	0m11.106s
00:26:00.579  user	0m9.637s
00:26:00.579  sys	0m1.207s
00:26:00.579   06:36:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579   06:36:15	-- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:26:00.579   06:36:15	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:00.579   06:36:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579  ************************************
00:26:00.579  START TEST fio_dif_1_multi_subsystems
00:26:00.579  ************************************
00:26:00.579   06:36:15	-- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems
00:26:00.579   06:36:15	-- target/dif.sh@92 -- # local files=1
00:26:00.579   06:36:15	-- target/dif.sh@94 -- # create_subsystems 0 1
00:26:00.579   06:36:15	-- target/dif.sh@28 -- # local sub
00:26:00.579   06:36:15	-- target/dif.sh@30 -- # for sub in "$@"
00:26:00.579   06:36:15	-- target/dif.sh@31 -- # create_subsystem 0
00:26:00.579   06:36:15	-- target/dif.sh@18 -- # local sub_id=0
00:26:00.579   06:36:15	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579  bdev_null0
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579  [2024-12-16 06:36:15.761781] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@30 -- # for sub in "$@"
00:26:00.579   06:36:15	-- target/dif.sh@31 -- # create_subsystem 1
00:26:00.579   06:36:15	-- target/dif.sh@18 -- # local sub_id=1
00:26:00.579   06:36:15	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579  bdev_null1
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:00.579   06:36:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:00.579   06:36:15	-- common/autotest_common.sh@10 -- # set +x
00:26:00.579   06:36:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:00.579   06:36:15	-- target/dif.sh@95 -- # fio /dev/fd/62
00:26:00.579    06:36:15	-- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:26:00.579    06:36:15	-- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:26:00.579    06:36:15	-- nvmf/common.sh@520 -- # config=()
00:26:00.579    06:36:15	-- nvmf/common.sh@520 -- # local subsystem config
00:26:00.579    06:36:15	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:00.579    06:36:15	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:00.579  {
00:26:00.579    "params": {
00:26:00.579      "name": "Nvme$subsystem",
00:26:00.579      "trtype": "$TEST_TRANSPORT",
00:26:00.579      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:00.579      "adrfam": "ipv4",
00:26:00.579      "trsvcid": "$NVMF_PORT",
00:26:00.579      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:00.579      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:00.579      "hdgst": ${hdgst:-false},
00:26:00.579      "ddgst": ${ddgst:-false}
00:26:00.579    },
00:26:00.579    "method": "bdev_nvme_attach_controller"
00:26:00.579  }
00:26:00.579  EOF
00:26:00.579  )")
00:26:00.579   06:36:15	-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:00.579    06:36:15	-- target/dif.sh@82 -- # gen_fio_conf
00:26:00.579   06:36:15	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:00.579    06:36:15	-- target/dif.sh@54 -- # local file
00:26:00.579    06:36:15	-- target/dif.sh@56 -- # cat
00:26:00.579   06:36:15	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:26:00.579     06:36:15	-- nvmf/common.sh@542 -- # cat
00:26:00.579   06:36:15	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:00.579   06:36:15	-- common/autotest_common.sh@1328 -- # local sanitizers
00:26:00.579   06:36:15	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:00.579   06:36:15	-- common/autotest_common.sh@1330 -- # shift
00:26:00.579   06:36:15	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:26:00.579   06:36:15	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:00.579    06:36:15	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:00.579    06:36:15	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:00.579    06:36:15	-- common/autotest_common.sh@1334 -- # grep libasan
00:26:00.579    06:36:15	-- target/dif.sh@72 -- # (( file = 1 ))
00:26:00.579    06:36:15	-- target/dif.sh@72 -- # (( file <= files ))
00:26:00.579    06:36:15	-- target/dif.sh@73 -- # cat
00:26:00.579    06:36:15	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:00.579    06:36:15	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:00.579  {
00:26:00.579    "params": {
00:26:00.579      "name": "Nvme$subsystem",
00:26:00.579      "trtype": "$TEST_TRANSPORT",
00:26:00.579      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:00.579      "adrfam": "ipv4",
00:26:00.579      "trsvcid": "$NVMF_PORT",
00:26:00.579      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:00.579      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:00.579      "hdgst": ${hdgst:-false},
00:26:00.579      "ddgst": ${ddgst:-false}
00:26:00.579    },
00:26:00.579    "method": "bdev_nvme_attach_controller"
00:26:00.579  }
00:26:00.579  EOF
00:26:00.579  )")
00:26:00.579     06:36:15	-- nvmf/common.sh@542 -- # cat
00:26:00.579    06:36:15	-- target/dif.sh@72 -- # (( file++ ))
00:26:00.579    06:36:15	-- target/dif.sh@72 -- # (( file <= files ))
00:26:00.579    06:36:15	-- nvmf/common.sh@544 -- # jq .
00:26:00.579     06:36:15	-- nvmf/common.sh@545 -- # IFS=,
00:26:00.579     06:36:15	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:26:00.579    "params": {
00:26:00.579      "name": "Nvme0",
00:26:00.579      "trtype": "tcp",
00:26:00.579      "traddr": "10.0.0.2",
00:26:00.579      "adrfam": "ipv4",
00:26:00.579      "trsvcid": "4420",
00:26:00.579      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:26:00.579      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:26:00.579      "hdgst": false,
00:26:00.579      "ddgst": false
00:26:00.579    },
00:26:00.579    "method": "bdev_nvme_attach_controller"
00:26:00.579  },{
00:26:00.580    "params": {
00:26:00.580      "name": "Nvme1",
00:26:00.580      "trtype": "tcp",
00:26:00.580      "traddr": "10.0.0.2",
00:26:00.580      "adrfam": "ipv4",
00:26:00.580      "trsvcid": "4420",
00:26:00.580      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:00.580      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:00.580      "hdgst": false,
00:26:00.580      "ddgst": false
00:26:00.580    },
00:26:00.580    "method": "bdev_nvme_attach_controller"
00:26:00.580  }'
00:26:00.580   06:36:15	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:00.580   06:36:15	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:00.580   06:36:15	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:00.580    06:36:15	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:00.580    06:36:15	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:26:00.580    06:36:15	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:00.580   06:36:15	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:00.580   06:36:15	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:00.580   06:36:15	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:26:00.580   06:36:15	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
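The job file passed in on /dev/fd/61 is produced by gen_fio_conf and is never echoed into the log. A hand-written job file equivalent to the filename0/filename1 sections that fio reports just below might look like the sketch here; the NvmeXn1 bdev names follow the usual SPDK NVMe bdev naming convention and, like the global options, are assumptions rather than the helper's literal output (the ioengine itself is supplied on the command line as shown above).

cat > job.fio <<'FIO'
[global]
rw=randread
bs=4k
iodepth=4
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO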
00:26:00.580  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:26:00.580  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:26:00.580  fio-3.35
00:26:00.580  Starting 2 threads
00:26:00.580  [2024-12-16 06:36:16.567185] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:26:00.580  [2024-12-16 06:36:16.568968] rpc.c:  90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:26:10.553  
00:26:10.553  filename0: (groupid=0, jobs=1): err= 0: pid=91600: Mon Dec 16 06:36:26 2024
00:26:10.553    read: IOPS=191, BW=766KiB/s (784kB/s)(7664KiB/10008msec)
00:26:10.553      slat (nsec): min=5960, max=49028, avg=8987.87, stdev=4971.45
00:26:10.553      clat (usec): min=354, max=42402, avg=20865.11, stdev=20238.03
00:26:10.553       lat (usec): min=361, max=42410, avg=20874.09, stdev=20237.96
00:26:10.553      clat percentiles (usec):
00:26:10.553       |  1.00th=[  363],  5.00th=[  375], 10.00th=[  383], 20.00th=[  392],
00:26:10.553       | 30.00th=[  408], 40.00th=[  433], 50.00th=[40633], 60.00th=[40633],
00:26:10.553       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:26:10.553       | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:26:10.553       | 99.99th=[42206]
00:26:10.553     bw (  KiB/s): min=  576, max=  960, per=45.46%, avg=771.26, stdev=111.89, samples=19
00:26:10.553     iops        : min=  144, max=  240, avg=192.79, stdev=27.99, samples=19
00:26:10.553    lat (usec)   : 500=46.35%, 750=2.71%, 1000=0.21%
00:26:10.553    lat (msec)   : 2=0.21%, 50=50.52%
00:26:10.553    cpu          : usr=97.60%, sys=2.00%, ctx=17, majf=0, minf=0
00:26:10.553    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:10.553       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:10.553       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:10.553       issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:10.553       latency   : target=0, window=0, percentile=100.00%, depth=4
00:26:10.553  filename1: (groupid=0, jobs=1): err= 0: pid=91601: Mon Dec 16 06:36:26 2024
00:26:10.553    read: IOPS=232, BW=931KiB/s (953kB/s)(9328KiB/10018msec)
00:26:10.553      slat (nsec): min=5858, max=64600, avg=8837.17, stdev=4975.75
00:26:10.553      clat (usec): min=348, max=41720, avg=17155.89, stdev=19956.61
00:26:10.553       lat (usec): min=354, max=41743, avg=17164.73, stdev=19956.59
00:26:10.553      clat percentiles (usec):
00:26:10.553       |  1.00th=[  355],  5.00th=[  363], 10.00th=[  371], 20.00th=[  379],
00:26:10.553       | 30.00th=[  392], 40.00th=[  408], 50.00th=[  441], 60.00th=[40633],
00:26:10.553       | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:26:10.553       | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:26:10.553       | 99.99th=[41681]
00:26:10.553     bw (  KiB/s): min=  576, max= 1472, per=54.89%, avg=931.20, stdev=224.58, samples=20
00:26:10.553     iops        : min=  144, max=  368, avg=232.80, stdev=56.14, samples=20
00:26:10.553    lat (usec)   : 500=56.05%, 750=2.27%, 1000=0.17%
00:26:10.553    lat (msec)   : 2=0.17%, 50=41.34%
00:26:10.553    cpu          : usr=97.74%, sys=1.88%, ctx=12, majf=0, minf=0
00:26:10.553    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:10.553       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:10.553       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:10.553       issued rwts: total=2332,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:10.553       latency   : target=0, window=0, percentile=100.00%, depth=4
00:26:10.553  
00:26:10.553  Run status group 0 (all jobs):
00:26:10.553     READ: bw=1696KiB/s (1737kB/s), 766KiB/s-931KiB/s (784kB/s-953kB/s), io=16.6MiB (17.4MB), run=10008-10018msec
00:26:10.553   06:36:26	-- target/dif.sh@96 -- # destroy_subsystems 0 1
00:26:10.553   06:36:26	-- target/dif.sh@43 -- # local sub
00:26:10.553   06:36:26	-- target/dif.sh@45 -- # for sub in "$@"
00:26:10.553   06:36:26	-- target/dif.sh@46 -- # destroy_subsystem 0
00:26:10.553   06:36:26	-- target/dif.sh@36 -- # local sub_id=0
00:26:10.553   06:36:26	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:10.553   06:36:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:10.553   06:36:26	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553   06:36:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:10.553   06:36:26	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:26:10.553   06:36:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:10.553   06:36:26	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553   06:36:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:10.553   06:36:26	-- target/dif.sh@45 -- # for sub in "$@"
00:26:10.553   06:36:26	-- target/dif.sh@46 -- # destroy_subsystem 1
00:26:10.553   06:36:26	-- target/dif.sh@36 -- # local sub_id=1
00:26:10.553   06:36:26	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:10.553   06:36:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:10.553   06:36:26	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553   06:36:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:10.553   06:36:26	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:26:10.553   06:36:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:10.553   06:36:26	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553  ************************************
00:26:10.553  END TEST fio_dif_1_multi_subsystems
00:26:10.553  ************************************
00:26:10.553   06:36:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:10.553  
00:26:10.553  real	0m11.233s
00:26:10.553  user	0m20.400s
00:26:10.553  sys	0m0.687s
00:26:10.553   06:36:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:10.553   06:36:26	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553   06:36:27	-- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:26:10.553   06:36:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:10.553   06:36:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:10.553   06:36:27	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553  ************************************
00:26:10.553  START TEST fio_dif_rand_params
00:26:10.553  ************************************
00:26:10.553   06:36:27	-- common/autotest_common.sh@1114 -- # fio_dif_rand_params
00:26:10.553   06:36:27	-- target/dif.sh@100 -- # local NULL_DIF
00:26:10.553   06:36:27	-- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:26:10.553   06:36:27	-- target/dif.sh@103 -- # NULL_DIF=3
00:26:10.553   06:36:27	-- target/dif.sh@103 -- # bs=128k
00:26:10.553   06:36:27	-- target/dif.sh@103 -- # numjobs=3
00:26:10.553   06:36:27	-- target/dif.sh@103 -- # iodepth=3
00:26:10.553   06:36:27	-- target/dif.sh@103 -- # runtime=5
00:26:10.553   06:36:27	-- target/dif.sh@105 -- # create_subsystems 0
00:26:10.553   06:36:27	-- target/dif.sh@28 -- # local sub
00:26:10.553   06:36:27	-- target/dif.sh@30 -- # for sub in "$@"
00:26:10.553   06:36:27	-- target/dif.sh@31 -- # create_subsystem 0
00:26:10.553   06:36:27	-- target/dif.sh@18 -- # local sub_id=0
00:26:10.553   06:36:27	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:26:10.553   06:36:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:10.553   06:36:27	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553  bdev_null0
00:26:10.553   06:36:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:10.553   06:36:27	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:26:10.553   06:36:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:10.553   06:36:27	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553   06:36:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:10.553   06:36:27	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:26:10.553   06:36:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:10.553   06:36:27	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553   06:36:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:10.553   06:36:27	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:10.553   06:36:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:10.553   06:36:27	-- common/autotest_common.sh@10 -- # set +x
00:26:10.553  [2024-12-16 06:36:27.051542] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:10.553   06:36:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:10.553   06:36:27	-- target/dif.sh@106 -- # fio /dev/fd/62
00:26:10.553    06:36:27	-- target/dif.sh@106 -- # create_json_sub_conf 0
00:26:10.553    06:36:27	-- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:26:10.553    06:36:27	-- nvmf/common.sh@520 -- # config=()
00:26:10.553    06:36:27	-- nvmf/common.sh@520 -- # local subsystem config
00:26:10.553   06:36:27	-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:10.553    06:36:27	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:10.553   06:36:27	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:10.553    06:36:27	-- target/dif.sh@82 -- # gen_fio_conf
00:26:10.553    06:36:27	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:10.553  {
00:26:10.553    "params": {
00:26:10.553      "name": "Nvme$subsystem",
00:26:10.553      "trtype": "$TEST_TRANSPORT",
00:26:10.553      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:10.553      "adrfam": "ipv4",
00:26:10.553      "trsvcid": "$NVMF_PORT",
00:26:10.553      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:10.553      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:10.553      "hdgst": ${hdgst:-false},
00:26:10.553      "ddgst": ${ddgst:-false}
00:26:10.553    },
00:26:10.553    "method": "bdev_nvme_attach_controller"
00:26:10.553  }
00:26:10.553  EOF
00:26:10.553  )")
00:26:10.553   06:36:27	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:26:10.553    06:36:27	-- target/dif.sh@54 -- # local file
00:26:10.553   06:36:27	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:10.553    06:36:27	-- target/dif.sh@56 -- # cat
00:26:10.553   06:36:27	-- common/autotest_common.sh@1328 -- # local sanitizers
00:26:10.553   06:36:27	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:10.553   06:36:27	-- common/autotest_common.sh@1330 -- # shift
00:26:10.554     06:36:27	-- nvmf/common.sh@542 -- # cat
00:26:10.554   06:36:27	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:26:10.554   06:36:27	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:10.554    06:36:27	-- target/dif.sh@72 -- # (( file = 1 ))
00:26:10.554    06:36:27	-- target/dif.sh@72 -- # (( file <= files ))
00:26:10.554    06:36:27	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:10.554    06:36:27	-- common/autotest_common.sh@1334 -- # grep libasan
00:26:10.554    06:36:27	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:10.554    06:36:27	-- nvmf/common.sh@544 -- # jq .
00:26:10.554     06:36:27	-- nvmf/common.sh@545 -- # IFS=,
00:26:10.554     06:36:27	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:26:10.554    "params": {
00:26:10.554      "name": "Nvme0",
00:26:10.554      "trtype": "tcp",
00:26:10.554      "traddr": "10.0.0.2",
00:26:10.554      "adrfam": "ipv4",
00:26:10.554      "trsvcid": "4420",
00:26:10.554      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:26:10.554      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:26:10.554      "hdgst": false,
00:26:10.554      "ddgst": false
00:26:10.554    },
00:26:10.554    "method": "bdev_nvme_attach_controller"
00:26:10.554  }'
00:26:10.554   06:36:27	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:10.554   06:36:27	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:10.554   06:36:27	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:10.554    06:36:27	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:10.554    06:36:27	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:26:10.554    06:36:27	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:10.554   06:36:27	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:10.554   06:36:27	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:10.554   06:36:27	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:26:10.554   06:36:27	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:10.554  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:26:10.554  ...
00:26:10.554  fio-3.35
00:26:10.554  Starting 3 threads
00:26:10.812  [2024-12-16 06:36:27.741017] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:26:10.812  [2024-12-16 06:36:27.741400] rpc.c:  90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:26:16.082  
00:26:16.082  filename0: (groupid=0, jobs=1): err= 0: pid=91757: Mon Dec 16 06:36:32 2024
00:26:16.082    read: IOPS=331, BW=41.5MiB/s (43.5MB/s)(208MiB/5004msec)
00:26:16.082      slat (nsec): min=6097, max=48707, avg=10941.67, stdev=5072.52
00:26:16.082      clat (usec): min=3425, max=57233, avg=9021.76, stdev=4220.74
00:26:16.082       lat (usec): min=3434, max=57240, avg=9032.70, stdev=4221.31
00:26:16.082      clat percentiles (usec):
00:26:16.082       |  1.00th=[ 3556],  5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 5538],
00:26:16.082       | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8356], 60.00th=[10683],
00:26:16.082       | 70.00th=[11600], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042],
00:26:16.082       | 99.00th=[19006], 99.50th=[20841], 99.90th=[55313], 99.95th=[57410],
00:26:16.082       | 99.99th=[57410]
00:26:16.082     bw (  KiB/s): min=30976, max=49152, per=39.62%, avg=41813.33, stdev=5504.00, samples=9
00:26:16.082     iops        : min=  242, max=  384, avg=326.67, stdev=43.00, samples=9
00:26:16.082    lat (msec)   : 4=16.45%, 10=40.72%, 20=42.23%, 50=0.42%, 100=0.18%
00:26:16.082    cpu          : usr=93.78%, sys=4.58%, ctx=5, majf=0, minf=0
00:26:16.082    IO depths    : 1=17.2%, 2=82.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:16.082       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:16.082       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:16.082       issued rwts: total=1660,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:16.082       latency   : target=0, window=0, percentile=100.00%, depth=3
00:26:16.082  filename0: (groupid=0, jobs=1): err= 0: pid=91758: Mon Dec 16 06:36:32 2024
00:26:16.082    read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(145MiB/5003msec)
00:26:16.082      slat (nsec): min=6156, max=92797, avg=13907.45, stdev=6353.72
00:26:16.082      clat (usec): min=5551, max=51688, avg=12926.96, stdev=12134.32
00:26:16.082       lat (usec): min=5563, max=51695, avg=12940.87, stdev=12134.35
00:26:16.082      clat percentiles (usec):
00:26:16.082       |  1.00th=[ 5932],  5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7832],
00:26:16.082       | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503],
00:26:16.082       | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[14484], 95.00th=[49546],
00:26:16.082       | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643],
00:26:16.082       | 99.99th=[51643]
00:26:16.082     bw (  KiB/s): min=16896, max=39936, per=27.92%, avg=29468.44, stdev=7048.66, samples=9
00:26:16.082     iops        : min=  132, max=  312, avg=230.22, stdev=55.07, samples=9
00:26:16.082    lat (msec)   : 10=75.58%, 20=14.58%, 50=6.13%, 100=3.71%
00:26:16.082    cpu          : usr=93.60%, sys=4.62%, ctx=6, majf=0, minf=0
00:26:16.082    IO depths    : 1=6.7%, 2=93.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:16.082       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:16.082       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:16.082       issued rwts: total=1159,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:16.082       latency   : target=0, window=0, percentile=100.00%, depth=3
00:26:16.082  filename0: (groupid=0, jobs=1): err= 0: pid=91759: Mon Dec 16 06:36:32 2024
00:26:16.082    read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5005msec)
00:26:16.082      slat (nsec): min=6111, max=50780, avg=13709.35, stdev=6127.44
00:26:16.082      clat (usec): min=3697, max=55556, avg=11458.68, stdev=9300.40
00:26:16.082       lat (usec): min=3704, max=55567, avg=11472.39, stdev=9300.98
00:26:16.082      clat percentiles (usec):
00:26:16.082       |  1.00th=[ 5342],  5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 6718],
00:26:16.082       | 30.00th=[ 7111], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[10683],
00:26:16.082       | 70.00th=[11076], 80.00th=[11600], 90.00th=[12518], 95.00th=[46400],
00:26:16.082       | 99.00th=[51643], 99.50th=[52167], 99.90th=[54264], 99.95th=[55313],
00:26:16.082       | 99.99th=[55313]
00:26:16.082     bw (  KiB/s): min=26880, max=40960, per=31.66%, avg=33414.80, stdev=5243.59, samples=10
00:26:16.082     iops        : min=  210, max=  320, avg=261.00, stdev=40.96, samples=10
00:26:16.082    lat (msec)   : 4=0.23%, 10=46.33%, 20=48.17%, 50=2.60%, 100=2.68%
00:26:16.082    cpu          : usr=94.26%, sys=4.32%, ctx=8, majf=0, minf=0
00:26:16.082    IO depths    : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:16.082       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:16.082       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:16.082       issued rwts: total=1308,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:16.082       latency   : target=0, window=0, percentile=100.00%, depth=3
00:26:16.082  
00:26:16.082  Run status group 0 (all jobs):
00:26:16.082     READ: bw=103MiB/s (108MB/s), 29.0MiB/s-41.5MiB/s (30.4MB/s-43.5MB/s), io=516MiB (541MB), run=5003-5005msec
00:26:16.342   06:36:33	-- target/dif.sh@107 -- # destroy_subsystems 0
00:26:16.342   06:36:33	-- target/dif.sh@43 -- # local sub
00:26:16.342   06:36:33	-- target/dif.sh@45 -- # for sub in "$@"
00:26:16.342   06:36:33	-- target/dif.sh@46 -- # destroy_subsystem 0
00:26:16.342   06:36:33	-- target/dif.sh@36 -- # local sub_id=0
00:26:16.342   06:36:33	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@109 -- # NULL_DIF=2
00:26:16.342   06:36:33	-- target/dif.sh@109 -- # bs=4k
00:26:16.342   06:36:33	-- target/dif.sh@109 -- # numjobs=8
00:26:16.342   06:36:33	-- target/dif.sh@109 -- # iodepth=16
00:26:16.342   06:36:33	-- target/dif.sh@109 -- # runtime=
00:26:16.342   06:36:33	-- target/dif.sh@109 -- # files=2
00:26:16.342   06:36:33	-- target/dif.sh@111 -- # create_subsystems 0 1 2
00:26:16.342   06:36:33	-- target/dif.sh@28 -- # local sub
00:26:16.342   06:36:33	-- target/dif.sh@30 -- # for sub in "$@"
00:26:16.342   06:36:33	-- target/dif.sh@31 -- # create_subsystem 0
00:26:16.342   06:36:33	-- target/dif.sh@18 -- # local sub_id=0
00:26:16.342   06:36:33	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342  bdev_null0
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342  [2024-12-16 06:36:33.189563] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@30 -- # for sub in "$@"
00:26:16.342   06:36:33	-- target/dif.sh@31 -- # create_subsystem 1
00:26:16.342   06:36:33	-- target/dif.sh@18 -- # local sub_id=1
00:26:16.342   06:36:33	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342  bdev_null1
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@30 -- # for sub in "$@"
00:26:16.342   06:36:33	-- target/dif.sh@31 -- # create_subsystem 2
00:26:16.342   06:36:33	-- target/dif.sh@18 -- # local sub_id=2
00:26:16.342   06:36:33	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342  bdev_null2
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:26:16.342   06:36:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.342   06:36:33	-- common/autotest_common.sh@10 -- # set +x
00:26:16.342   06:36:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.342   06:36:33	-- target/dif.sh@112 -- # fio /dev/fd/62
00:26:16.342    06:36:33	-- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:26:16.342    06:36:33	-- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:26:16.342    06:36:33	-- nvmf/common.sh@520 -- # config=()
00:26:16.342    06:36:33	-- nvmf/common.sh@520 -- # local subsystem config
00:26:16.342    06:36:33	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:16.342    06:36:33	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:16.342  {
00:26:16.342    "params": {
00:26:16.342      "name": "Nvme$subsystem",
00:26:16.342      "trtype": "$TEST_TRANSPORT",
00:26:16.342      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:16.342      "adrfam": "ipv4",
00:26:16.342      "trsvcid": "$NVMF_PORT",
00:26:16.342      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:16.342      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:16.342      "hdgst": ${hdgst:-false},
00:26:16.342      "ddgst": ${ddgst:-false}
00:26:16.342    },
00:26:16.342    "method": "bdev_nvme_attach_controller"
00:26:16.342  }
00:26:16.342  EOF
00:26:16.342  )")
00:26:16.342   06:36:33	-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:16.342   06:36:33	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:16.342   06:36:33	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:26:16.342    06:36:33	-- target/dif.sh@82 -- # gen_fio_conf
00:26:16.342   06:36:33	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:16.342   06:36:33	-- common/autotest_common.sh@1328 -- # local sanitizers
00:26:16.342    06:36:33	-- target/dif.sh@54 -- # local file
00:26:16.342   06:36:33	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:16.342    06:36:33	-- target/dif.sh@56 -- # cat
00:26:16.342   06:36:33	-- common/autotest_common.sh@1330 -- # shift
00:26:16.342   06:36:33	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:26:16.342   06:36:33	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:16.342     06:36:33	-- nvmf/common.sh@542 -- # cat
00:26:16.342    06:36:33	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:16.342    06:36:33	-- common/autotest_common.sh@1334 -- # grep libasan
00:26:16.342    06:36:33	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:16.342    06:36:33	-- target/dif.sh@72 -- # (( file = 1 ))
00:26:16.343    06:36:33	-- target/dif.sh@72 -- # (( file <= files ))
00:26:16.343    06:36:33	-- target/dif.sh@73 -- # cat
00:26:16.343    06:36:33	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:16.343    06:36:33	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:16.343  {
00:26:16.343    "params": {
00:26:16.343      "name": "Nvme$subsystem",
00:26:16.343      "trtype": "$TEST_TRANSPORT",
00:26:16.343      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:16.343      "adrfam": "ipv4",
00:26:16.343      "trsvcid": "$NVMF_PORT",
00:26:16.343      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:16.343      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:16.343      "hdgst": ${hdgst:-false},
00:26:16.343      "ddgst": ${ddgst:-false}
00:26:16.343    },
00:26:16.343    "method": "bdev_nvme_attach_controller"
00:26:16.343  }
00:26:16.343  EOF
00:26:16.343  )")
00:26:16.343     06:36:33	-- nvmf/common.sh@542 -- # cat
00:26:16.343    06:36:33	-- target/dif.sh@72 -- # (( file++ ))
00:26:16.343    06:36:33	-- target/dif.sh@72 -- # (( file <= files ))
00:26:16.343    06:36:33	-- target/dif.sh@73 -- # cat
00:26:16.343    06:36:33	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:16.343    06:36:33	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:16.343  {
00:26:16.343    "params": {
00:26:16.343      "name": "Nvme$subsystem",
00:26:16.343      "trtype": "$TEST_TRANSPORT",
00:26:16.343      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:16.343      "adrfam": "ipv4",
00:26:16.343      "trsvcid": "$NVMF_PORT",
00:26:16.343      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:16.343      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:16.343      "hdgst": ${hdgst:-false},
00:26:16.343      "ddgst": ${ddgst:-false}
00:26:16.343    },
00:26:16.343    "method": "bdev_nvme_attach_controller"
00:26:16.343  }
00:26:16.343  EOF
00:26:16.343  )")
00:26:16.343    06:36:33	-- target/dif.sh@72 -- # (( file++ ))
00:26:16.343    06:36:33	-- target/dif.sh@72 -- # (( file <= files ))
00:26:16.343     06:36:33	-- nvmf/common.sh@542 -- # cat
00:26:16.343    06:36:33	-- nvmf/common.sh@544 -- # jq .
00:26:16.343     06:36:33	-- nvmf/common.sh@545 -- # IFS=,
00:26:16.343     06:36:33	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:26:16.343    "params": {
00:26:16.343      "name": "Nvme0",
00:26:16.343      "trtype": "tcp",
00:26:16.343      "traddr": "10.0.0.2",
00:26:16.343      "adrfam": "ipv4",
00:26:16.343      "trsvcid": "4420",
00:26:16.343      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:26:16.343      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:26:16.343      "hdgst": false,
00:26:16.343      "ddgst": false
00:26:16.343    },
00:26:16.343    "method": "bdev_nvme_attach_controller"
00:26:16.343  },{
00:26:16.343    "params": {
00:26:16.343      "name": "Nvme1",
00:26:16.343      "trtype": "tcp",
00:26:16.343      "traddr": "10.0.0.2",
00:26:16.343      "adrfam": "ipv4",
00:26:16.343      "trsvcid": "4420",
00:26:16.343      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:16.343      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:16.343      "hdgst": false,
00:26:16.343      "ddgst": false
00:26:16.343    },
00:26:16.343    "method": "bdev_nvme_attach_controller"
00:26:16.343  },{
00:26:16.343    "params": {
00:26:16.343      "name": "Nvme2",
00:26:16.343      "trtype": "tcp",
00:26:16.343      "traddr": "10.0.0.2",
00:26:16.343      "adrfam": "ipv4",
00:26:16.343      "trsvcid": "4420",
00:26:16.343      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:26:16.343      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:26:16.343      "hdgst": false,
00:26:16.343      "ddgst": false
00:26:16.343    },
00:26:16.343    "method": "bdev_nvme_attach_controller"
00:26:16.343  }'
00:26:16.343   06:36:33	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:16.343   06:36:33	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:16.343   06:36:33	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:16.343    06:36:33	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:16.343    06:36:33	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:16.343    06:36:33	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:26:16.601   06:36:33	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:16.601   06:36:33	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:16.601   06:36:33	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:26:16.601   06:36:33	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:16.602  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:16.602  ...
00:26:16.602  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:16.602  ...
00:26:16.602  filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:16.602  ...
00:26:16.602  fio-3.35
00:26:16.602  Starting 24 threads
00:26:17.169  [2024-12-16 06:36:34.107599] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:26:17.169  [2024-12-16 06:36:34.107664] rpc.c:  90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:26:29.375  
00:26:29.375  filename0: (groupid=0, jobs=1): err= 0: pid=91854: Mon Dec 16 06:36:44 2024
00:26:29.375    read: IOPS=265, BW=1063KiB/s (1089kB/s)(10.4MiB/10029msec)
00:26:29.375      slat (usec): min=4, max=8021, avg=21.80, stdev=260.02
00:26:29.375      clat (msec): min=24, max=113, avg=60.03, stdev=16.62
00:26:29.375       lat (msec): min=24, max=113, avg=60.05, stdev=16.63
00:26:29.375      clat percentiles (msec):
00:26:29.375       |  1.00th=[   32],  5.00th=[   36], 10.00th=[   40], 20.00th=[   45],
00:26:29.375       | 30.00th=[   49], 40.00th=[   56], 50.00th=[   61], 60.00th=[   64],
00:26:29.375       | 70.00th=[   69], 80.00th=[   74], 90.00th=[   84], 95.00th=[   90],
00:26:29.375       | 99.00th=[  104], 99.50th=[  110], 99.90th=[  112], 99.95th=[  112],
00:26:29.375       | 99.99th=[  114]
00:26:29.375     bw (  KiB/s): min=  784, max= 1360, per=4.23%, avg=1062.40, stdev=144.92, samples=20
00:26:29.375     iops        : min=  196, max=  340, avg=265.60, stdev=36.23, samples=20
00:26:29.375    lat (msec)   : 50=32.97%, 100=65.53%, 250=1.50%
00:26:29.375    cpu          : usr=34.45%, sys=0.60%, ctx=1402, majf=0, minf=9
00:26:29.375    IO depths    : 1=1.2%, 2=2.7%, 4=10.2%, 8=73.7%, 16=12.3%, 32=0.0%, >=64=0.0%
00:26:29.375       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       complete  : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       issued rwts: total=2666,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.375       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.375  filename0: (groupid=0, jobs=1): err= 0: pid=91855: Mon Dec 16 06:36:44 2024
00:26:29.375    read: IOPS=241, BW=966KiB/s (990kB/s)(9672KiB/10009msec)
00:26:29.375      slat (usec): min=4, max=8033, avg=22.22, stdev=257.79
00:26:29.375      clat (msec): min=27, max=118, avg=66.09, stdev=16.38
00:26:29.375       lat (msec): min=27, max=118, avg=66.11, stdev=16.39
00:26:29.375      clat percentiles (msec):
00:26:29.375       |  1.00th=[   33],  5.00th=[   42], 10.00th=[   48], 20.00th=[   55],
00:26:29.375       | 30.00th=[   58], 40.00th=[   61], 50.00th=[   64], 60.00th=[   68],
00:26:29.375       | 70.00th=[   72], 80.00th=[   80], 90.00th=[   89], 95.00th=[   99],
00:26:29.375       | 99.00th=[  115], 99.50th=[  116], 99.90th=[  116], 99.95th=[  120],
00:26:29.375       | 99.99th=[  120]
00:26:29.375     bw (  KiB/s): min=  768, max= 1200, per=3.87%, avg=970.95, stdev=97.37, samples=19
00:26:29.375     iops        : min=  192, max=  300, avg=242.74, stdev=24.34, samples=19
00:26:29.375    lat (msec)   : 50=15.26%, 100=81.10%, 250=3.64%
00:26:29.375    cpu          : usr=42.57%, sys=0.55%, ctx=1150, majf=0, minf=9
00:26:29.375    IO depths    : 1=3.0%, 2=6.7%, 4=17.2%, 8=63.4%, 16=9.8%, 32=0.0%, >=64=0.0%
00:26:29.375       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       complete  : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.375       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.375  filename0: (groupid=0, jobs=1): err= 0: pid=91856: Mon Dec 16 06:36:44 2024
00:26:29.375    read: IOPS=267, BW=1070KiB/s (1096kB/s)(10.5MiB/10042msec)
00:26:29.375      slat (usec): min=4, max=8018, avg=19.34, stdev=226.78
00:26:29.375      clat (msec): min=24, max=146, avg=59.55, stdev=16.84
00:26:29.375       lat (msec): min=24, max=146, avg=59.57, stdev=16.85
00:26:29.375      clat percentiles (msec):
00:26:29.375       |  1.00th=[   28],  5.00th=[   36], 10.00th=[   39], 20.00th=[   47],
00:26:29.375       | 30.00th=[   51], 40.00th=[   56], 50.00th=[   59], 60.00th=[   61],
00:26:29.375       | 70.00th=[   65], 80.00th=[   72], 90.00th=[   84], 95.00th=[   90],
00:26:29.375       | 99.00th=[  109], 99.50th=[  117], 99.90th=[  146], 99.95th=[  146],
00:26:29.375       | 99.99th=[  146]
00:26:29.375     bw (  KiB/s): min=  896, max= 1248, per=4.27%, avg=1072.40, stdev=108.00, samples=20
00:26:29.375     iops        : min=  224, max=  312, avg=268.10, stdev=27.00, samples=20
00:26:29.375    lat (msec)   : 50=29.66%, 100=68.11%, 250=2.23%
00:26:29.375    cpu          : usr=33.88%, sys=0.65%, ctx=1066, majf=0, minf=9
00:26:29.375    IO depths    : 1=0.7%, 2=1.8%, 4=8.6%, 8=75.9%, 16=13.0%, 32=0.0%, >=64=0.0%
00:26:29.375       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       complete  : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       issued rwts: total=2687,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.375       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.375  filename0: (groupid=0, jobs=1): err= 0: pid=91857: Mon Dec 16 06:36:44 2024
00:26:29.375    read: IOPS=262, BW=1050KiB/s (1075kB/s)(10.3MiB/10040msec)
00:26:29.375      slat (usec): min=6, max=4027, avg=15.69, stdev=110.66
00:26:29.375      clat (msec): min=22, max=131, avg=60.74, stdev=18.27
00:26:29.375       lat (msec): min=22, max=131, avg=60.76, stdev=18.27
00:26:29.375      clat percentiles (msec):
00:26:29.375       |  1.00th=[   32],  5.00th=[   36], 10.00th=[   39], 20.00th=[   46],
00:26:29.375       | 30.00th=[   50], 40.00th=[   57], 50.00th=[   61], 60.00th=[   63],
00:26:29.375       | 70.00th=[   69], 80.00th=[   73], 90.00th=[   84], 95.00th=[   96],
00:26:29.375       | 99.00th=[  124], 99.50th=[  131], 99.90th=[  132], 99.95th=[  132],
00:26:29.375       | 99.99th=[  132]
00:26:29.375     bw (  KiB/s): min=  696, max= 1456, per=4.17%, avg=1048.00, stdev=156.81, samples=20
00:26:29.375     iops        : min=  174, max=  364, avg=262.00, stdev=39.20, samples=20
00:26:29.375    lat (msec)   : 50=31.98%, 100=65.33%, 250=2.69%
00:26:29.375    cpu          : usr=37.19%, sys=0.58%, ctx=1013, majf=0, minf=9
00:26:29.375    IO depths    : 1=1.3%, 2=3.0%, 4=10.3%, 8=73.1%, 16=12.3%, 32=0.0%, >=64=0.0%
00:26:29.375       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       complete  : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       issued rwts: total=2636,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.375       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.375  filename0: (groupid=0, jobs=1): err= 0: pid=91858: Mon Dec 16 06:36:44 2024
00:26:29.375    read: IOPS=255, BW=1021KiB/s (1045kB/s)(10.0MiB/10060msec)
00:26:29.375      slat (usec): min=4, max=8028, avg=33.87, stdev=402.38
00:26:29.375      clat (msec): min=27, max=143, avg=62.44, stdev=18.99
00:26:29.375       lat (msec): min=27, max=143, avg=62.48, stdev=19.00
00:26:29.375      clat percentiles (msec):
00:26:29.375       |  1.00th=[   32],  5.00th=[   36], 10.00th=[   40], 20.00th=[   47],
00:26:29.375       | 30.00th=[   50], 40.00th=[   58], 50.00th=[   61], 60.00th=[   63],
00:26:29.375       | 70.00th=[   72], 80.00th=[   79], 90.00th=[   85], 95.00th=[   96],
00:26:29.375       | 99.00th=[  121], 99.50th=[  132], 99.90th=[  144], 99.95th=[  144],
00:26:29.375       | 99.99th=[  144]
00:26:29.375     bw (  KiB/s): min=  640, max= 1304, per=4.07%, avg=1020.30, stdev=159.21, samples=20
00:26:29.375     iops        : min=  160, max=  326, avg=255.05, stdev=39.82, samples=20
00:26:29.375    lat (msec)   : 50=31.52%, 100=64.20%, 250=4.29%
00:26:29.375    cpu          : usr=33.92%, sys=0.47%, ctx=916, majf=0, minf=9
00:26:29.375    IO depths    : 1=1.4%, 2=3.4%, 4=10.5%, 8=72.7%, 16=12.0%, 32=0.0%, >=64=0.0%
00:26:29.375       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       complete  : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.375       issued rwts: total=2567,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.375       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.375  filename0: (groupid=0, jobs=1): err= 0: pid=91859: Mon Dec 16 06:36:44 2024
00:26:29.376    read: IOPS=258, BW=1033KiB/s (1058kB/s)(10.1MiB/10002msec)
00:26:29.376      slat (usec): min=4, max=4063, avg=12.92, stdev=80.02
00:26:29.376      clat (msec): min=20, max=121, avg=61.88, stdev=17.36
00:26:29.376       lat (msec): min=20, max=121, avg=61.90, stdev=17.36
00:26:29.376      clat percentiles (msec):
00:26:29.376       |  1.00th=[   25],  5.00th=[   36], 10.00th=[   40], 20.00th=[   48],
00:26:29.376       | 30.00th=[   54], 40.00th=[   59], 50.00th=[   61], 60.00th=[   62],
00:26:29.376       | 70.00th=[   71], 80.00th=[   77], 90.00th=[   85], 95.00th=[   93],
00:26:29.376       | 99.00th=[  108], 99.50th=[  117], 99.90th=[  123], 99.95th=[  123],
00:26:29.376       | 99.99th=[  123]
00:26:29.376     bw (  KiB/s): min=  856, max= 1424, per=4.17%, avg=1047.21, stdev=144.06, samples=19
00:26:29.376     iops        : min=  214, max=  356, avg=261.79, stdev=36.03, samples=19
00:26:29.376    lat (msec)   : 50=26.60%, 100=71.54%, 250=1.86%
00:26:29.376    cpu          : usr=37.18%, sys=0.52%, ctx=1021, majf=0, minf=9
00:26:29.376    IO depths    : 1=1.0%, 2=2.4%, 4=9.8%, 8=73.8%, 16=13.1%, 32=0.0%, >=64=0.0%
00:26:29.376       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       complete  : 0=0.0%, 4=90.1%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       issued rwts: total=2583,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.376       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.376  filename0: (groupid=0, jobs=1): err= 0: pid=91860: Mon Dec 16 06:36:44 2024
00:26:29.376    read: IOPS=310, BW=1241KiB/s (1271kB/s)(12.2MiB/10056msec)
00:26:29.376      slat (usec): min=4, max=9025, avg=24.02, stdev=292.22
00:26:29.376      clat (msec): min=2, max=116, avg=51.37, stdev=17.50
00:26:29.376       lat (msec): min=2, max=116, avg=51.39, stdev=17.51
00:26:29.376      clat percentiles (msec):
00:26:29.376       |  1.00th=[   10],  5.00th=[   31], 10.00th=[   34], 20.00th=[   39],
00:26:29.376       | 30.00th=[   41], 40.00th=[   44], 50.00th=[   48], 60.00th=[   54],
00:26:29.376       | 70.00th=[   60], 80.00th=[   66], 90.00th=[   75], 95.00th=[   83],
00:26:29.376       | 99.00th=[   96], 99.50th=[  105], 99.90th=[  116], 99.95th=[  116],
00:26:29.376       | 99.99th=[  116]
00:26:29.376     bw (  KiB/s): min=  736, max= 1736, per=4.95%, avg=1241.15, stdev=227.10, samples=20
00:26:29.376     iops        : min=  184, max=  434, avg=310.25, stdev=56.80, samples=20
00:26:29.376    lat (msec)   : 4=0.51%, 10=0.51%, 20=1.54%, 50=52.29%, 100=44.38%
00:26:29.376    lat (msec)   : 250=0.77%
00:26:29.376    cpu          : usr=42.53%, sys=0.68%, ctx=1196, majf=0, minf=0
00:26:29.376    IO depths    : 1=0.4%, 2=0.7%, 4=6.3%, 8=79.2%, 16=13.5%, 32=0.0%, >=64=0.0%
00:26:29.376       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       complete  : 0=0.0%, 4=89.2%, 8=6.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       issued rwts: total=3121,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.376       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.376  filename0: (groupid=0, jobs=1): err= 0: pid=91861: Mon Dec 16 06:36:44 2024
00:26:29.376    read: IOPS=230, BW=924KiB/s (946kB/s)(9240KiB/10002msec)
00:26:29.376      slat (usec): min=4, max=8019, avg=26.17, stdev=304.89
00:26:29.376      clat (msec): min=25, max=129, avg=69.10, stdev=16.34
00:26:29.376       lat (msec): min=25, max=129, avg=69.13, stdev=16.34
00:26:29.376      clat percentiles (msec):
00:26:29.376       |  1.00th=[   32],  5.00th=[   47], 10.00th=[   50], 20.00th=[   59],
00:26:29.376       | 30.00th=[   61], 40.00th=[   62], 50.00th=[   69], 60.00th=[   72],
00:26:29.376       | 70.00th=[   77], 80.00th=[   84], 90.00th=[   90], 95.00th=[   96],
00:26:29.376       | 99.00th=[  108], 99.50th=[  121], 99.90th=[  130], 99.95th=[  130],
00:26:29.376       | 99.99th=[  130]
00:26:29.376     bw (  KiB/s): min=  688, max= 1072, per=3.70%, avg=928.00, stdev=103.31, samples=19
00:26:29.376     iops        : min=  172, max=  268, avg=232.00, stdev=25.83, samples=19
00:26:29.376    lat (msec)   : 50=10.39%, 100=85.11%, 250=4.50%
00:26:29.376    cpu          : usr=33.93%, sys=0.56%, ctx=995, majf=0, minf=9
00:26:29.376    IO depths    : 1=1.7%, 2=3.8%, 4=13.0%, 8=69.6%, 16=11.9%, 32=0.0%, >=64=0.0%
00:26:29.376       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       complete  : 0=0.0%, 4=90.7%, 8=4.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.376       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.376  filename1: (groupid=0, jobs=1): err= 0: pid=91862: Mon Dec 16 06:36:44 2024
00:26:29.376    read: IOPS=302, BW=1210KiB/s (1239kB/s)(11.9MiB/10054msec)
00:26:29.376      slat (usec): min=3, max=4039, avg=16.45, stdev=131.85
00:26:29.376      clat (msec): min=5, max=127, avg=52.71, stdev=16.97
00:26:29.376       lat (msec): min=5, max=127, avg=52.72, stdev=16.97
00:26:29.376      clat percentiles (msec):
00:26:29.376       |  1.00th=[    7],  5.00th=[   32], 10.00th=[   35], 20.00th=[   40],
00:26:29.376       | 30.00th=[   44], 40.00th=[   47], 50.00th=[   53], 60.00th=[   56],
00:26:29.376       | 70.00th=[   61], 80.00th=[   65], 90.00th=[   72], 95.00th=[   81],
00:26:29.376       | 99.00th=[  109], 99.50th=[  112], 99.90th=[  128], 99.95th=[  128],
00:26:29.376       | 99.99th=[  128]
00:26:29.376     bw (  KiB/s): min=  864, max= 1632, per=4.82%, avg=1209.45, stdev=186.66, samples=20
00:26:29.376     iops        : min=  216, max=  408, avg=302.35, stdev=46.66, samples=20
00:26:29.376    lat (msec)   : 10=1.58%, 20=0.53%, 50=45.54%, 100=50.97%, 250=1.38%
00:26:29.376    cpu          : usr=46.16%, sys=0.54%, ctx=1572, majf=0, minf=9
00:26:29.376    IO depths    : 1=0.9%, 2=2.1%, 4=9.0%, 8=75.4%, 16=12.5%, 32=0.0%, >=64=0.0%
00:26:29.376       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       complete  : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       issued rwts: total=3041,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.376       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.376  filename1: (groupid=0, jobs=1): err= 0: pid=91863: Mon Dec 16 06:36:44 2024
00:26:29.376    read: IOPS=255, BW=1021KiB/s (1045kB/s)(10.00MiB/10028msec)
00:26:29.376      slat (nsec): min=3573, max=57241, avg=12366.28, stdev=7425.08
00:26:29.376      clat (msec): min=25, max=143, avg=62.62, stdev=18.23
00:26:29.376       lat (msec): min=25, max=143, avg=62.63, stdev=18.23
00:26:29.376      clat percentiles (msec):
00:26:29.376       |  1.00th=[   34],  5.00th=[   36], 10.00th=[   39], 20.00th=[   48],
00:26:29.376       | 30.00th=[   53], 40.00th=[   59], 50.00th=[   61], 60.00th=[   63],
00:26:29.376       | 70.00th=[   71], 80.00th=[   78], 90.00th=[   85], 95.00th=[   94],
00:26:29.376       | 99.00th=[  121], 99.50th=[  121], 99.90th=[  144], 99.95th=[  144],
00:26:29.376       | 99.99th=[  144]
00:26:29.376     bw (  KiB/s): min=  640, max= 1248, per=4.05%, avg=1017.25, stdev=140.32, samples=20
00:26:29.376     iops        : min=  160, max=  312, avg=254.30, stdev=35.08, samples=20
00:26:29.376    lat (msec)   : 50=27.67%, 100=68.46%, 250=3.87%
00:26:29.376    cpu          : usr=34.22%, sys=0.51%, ctx=1078, majf=0, minf=9
00:26:29.376    IO depths    : 1=1.3%, 2=2.9%, 4=9.6%, 8=73.7%, 16=12.5%, 32=0.0%, >=64=0.0%
00:26:29.376       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       complete  : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       issued rwts: total=2559,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.376       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.376  filename1: (groupid=0, jobs=1): err= 0: pid=91864: Mon Dec 16 06:36:44 2024
00:26:29.376    read: IOPS=240, BW=961KiB/s (984kB/s)(9612KiB/10001msec)
00:26:29.376      slat (usec): min=4, max=8033, avg=20.94, stdev=245.25
00:26:29.376      clat (msec): min=2, max=144, avg=66.46, stdev=18.54
00:26:29.376       lat (msec): min=2, max=144, avg=66.48, stdev=18.53
00:26:29.376      clat percentiles (msec):
00:26:29.376       |  1.00th=[   11],  5.00th=[   40], 10.00th=[   48], 20.00th=[   56],
00:26:29.376       | 30.00th=[   59], 40.00th=[   61], 50.00th=[   62], 60.00th=[   69],
00:26:29.376       | 70.00th=[   72], 80.00th=[   83], 90.00th=[   94], 95.00th=[   96],
00:26:29.376       | 99.00th=[  118], 99.50th=[  125], 99.90th=[  144], 99.95th=[  144],
00:26:29.376       | 99.99th=[  144]
00:26:29.376     bw (  KiB/s): min=  768, max= 1224, per=3.82%, avg=957.89, stdev=128.41, samples=19
00:26:29.376     iops        : min=  192, max=  306, avg=239.47, stdev=32.10, samples=19
00:26:29.376    lat (msec)   : 4=0.67%, 20=0.67%, 50=11.94%, 100=82.48%, 250=4.24%
00:26:29.376    cpu          : usr=38.16%, sys=0.51%, ctx=1041, majf=0, minf=9
00:26:29.376    IO depths    : 1=2.8%, 2=6.3%, 4=16.6%, 8=64.0%, 16=10.3%, 32=0.0%, >=64=0.0%
00:26:29.376       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       complete  : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.376       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.376  filename1: (groupid=0, jobs=1): err= 0: pid=91865: Mon Dec 16 06:36:44 2024
00:26:29.376    read: IOPS=250, BW=1001KiB/s (1025kB/s)(9.80MiB/10025msec)
00:26:29.376      slat (nsec): min=5135, max=52011, avg=12264.36, stdev=7571.57
00:26:29.376      clat (msec): min=23, max=124, avg=63.88, stdev=17.36
00:26:29.376       lat (msec): min=23, max=124, avg=63.89, stdev=17.36
00:26:29.376      clat percentiles (msec):
00:26:29.376       |  1.00th=[   34],  5.00th=[   38], 10.00th=[   42], 20.00th=[   48],
00:26:29.376       | 30.00th=[   58], 40.00th=[   60], 50.00th=[   61], 60.00th=[   66],
00:26:29.376       | 70.00th=[   72], 80.00th=[   75], 90.00th=[   85], 95.00th=[   96],
00:26:29.376       | 99.00th=[  109], 99.50th=[  124], 99.90th=[  125], 99.95th=[  125],
00:26:29.376       | 99.99th=[  125]
00:26:29.376     bw (  KiB/s): min=  768, max= 1248, per=3.97%, avg=996.85, stdev=130.56, samples=20
00:26:29.376     iops        : min=  192, max=  312, avg=249.20, stdev=32.65, samples=20
00:26:29.376    lat (msec)   : 50=22.25%, 100=74.00%, 250=3.75%
00:26:29.376    cpu          : usr=37.59%, sys=0.59%, ctx=1068, majf=0, minf=9
00:26:29.376    IO depths    : 1=0.9%, 2=2.3%, 4=9.9%, 8=74.2%, 16=12.8%, 32=0.0%, >=64=0.0%
00:26:29.376       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       complete  : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.376       issued rwts: total=2508,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.376       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.376  filename1: (groupid=0, jobs=1): err= 0: pid=91866: Mon Dec 16 06:36:44 2024
00:26:29.376    read: IOPS=295, BW=1181KiB/s (1210kB/s)(11.6MiB/10066msec)
00:26:29.376      slat (nsec): min=3307, max=49896, avg=10983.82, stdev=6458.71
00:26:29.376      clat (usec): min=1386, max=137710, avg=53941.02, stdev=21971.70
00:26:29.376       lat (usec): min=1392, max=137717, avg=53952.01, stdev=21972.22
00:26:29.376      clat percentiles (usec):
00:26:29.376       |  1.00th=[  1549],  5.00th=[  3425], 10.00th=[ 33817], 20.00th=[ 39060],
00:26:29.376       | 30.00th=[ 46400], 40.00th=[ 47973], 50.00th=[ 53740], 60.00th=[ 60031],
00:26:29.376       | 70.00th=[ 61604], 80.00th=[ 70779], 90.00th=[ 80217], 95.00th=[ 88605],
00:26:29.376       | 99.00th=[110625], 99.50th=[121111], 99.90th=[137364], 99.95th=[137364],
00:26:29.376       | 99.99th=[137364]
00:26:29.376     bw (  KiB/s): min=  856, max= 2799, per=4.72%, avg=1184.75, stdev=406.82, samples=20
00:26:29.376     iops        : min=  214, max=  699, avg=296.15, stdev=101.55, samples=20
00:26:29.376    lat (msec)   : 2=2.32%, 4=3.06%, 10=1.61%, 20=0.54%, 50=37.03%
00:26:29.376    lat (msec)   : 100=52.94%, 250=2.49%
00:26:29.376    cpu          : usr=36.34%, sys=0.62%, ctx=1162, majf=0, minf=0
00:26:29.376    IO depths    : 1=1.7%, 2=4.0%, 4=13.1%, 8=69.9%, 16=11.3%, 32=0.0%, >=64=0.0%
00:26:29.376       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       complete  : 0=0.0%, 4=90.7%, 8=4.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       issued rwts: total=2973,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.377       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.377  filename1: (groupid=0, jobs=1): err= 0: pid=91867: Mon Dec 16 06:36:44 2024
00:26:29.377    read: IOPS=245, BW=983KiB/s (1006kB/s)(9856KiB/10029msec)
00:26:29.377      slat (usec): min=4, max=8036, avg=20.43, stdev=220.38
00:26:29.377      clat (msec): min=25, max=140, avg=64.95, stdev=17.39
00:26:29.377       lat (msec): min=25, max=140, avg=64.97, stdev=17.39
00:26:29.377      clat percentiles (msec):
00:26:29.377       |  1.00th=[   32],  5.00th=[   39], 10.00th=[   46], 20.00th=[   52],
00:26:29.377       | 30.00th=[   56], 40.00th=[   60], 50.00th=[   63], 60.00th=[   68],
00:26:29.377       | 70.00th=[   72], 80.00th=[   80], 90.00th=[   86], 95.00th=[   94],
00:26:29.377       | 99.00th=[  120], 99.50th=[  122], 99.90th=[  142], 99.95th=[  142],
00:26:29.377       | 99.99th=[  142]
00:26:29.377     bw (  KiB/s): min=  800, max= 1120, per=3.90%, avg=978.40, stdev=93.95, samples=20
00:26:29.377     iops        : min=  200, max=  280, avg=244.60, stdev=23.49, samples=20
00:26:29.377    lat (msec)   : 50=17.82%, 100=78.65%, 250=3.53%
00:26:29.377    cpu          : usr=39.43%, sys=0.62%, ctx=1136, majf=0, minf=9
00:26:29.377    IO depths    : 1=2.2%, 2=4.7%, 4=13.4%, 8=68.5%, 16=11.2%, 32=0.0%, >=64=0.0%
00:26:29.377       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       complete  : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.377       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.377  filename1: (groupid=0, jobs=1): err= 0: pid=91868: Mon Dec 16 06:36:44 2024
00:26:29.377    read: IOPS=242, BW=969KiB/s (993kB/s)(9720KiB/10028msec)
00:26:29.377      slat (usec): min=4, max=8024, avg=21.71, stdev=223.28
00:26:29.377      clat (msec): min=28, max=128, avg=65.80, stdev=18.48
00:26:29.377       lat (msec): min=28, max=128, avg=65.82, stdev=18.48
00:26:29.377      clat percentiles (msec):
00:26:29.377       |  1.00th=[   33],  5.00th=[   39], 10.00th=[   42], 20.00th=[   53],
00:26:29.377       | 30.00th=[   57], 40.00th=[   61], 50.00th=[   64], 60.00th=[   67],
00:26:29.377       | 70.00th=[   72], 80.00th=[   81], 90.00th=[   90], 95.00th=[  101],
00:26:29.377       | 99.00th=[  120], 99.50th=[  126], 99.90th=[  129], 99.95th=[  129],
00:26:29.377       | 99.99th=[  129]
00:26:29.377     bw (  KiB/s): min=  596, max= 1280, per=3.86%, avg=967.45, stdev=158.57, samples=20
00:26:29.377     iops        : min=  149, max=  320, avg=241.85, stdev=39.66, samples=20
00:26:29.377    lat (msec)   : 50=17.57%, 100=77.33%, 250=5.10%
00:26:29.377    cpu          : usr=44.10%, sys=0.61%, ctx=1414, majf=0, minf=9
00:26:29.377    IO depths    : 1=2.1%, 2=4.8%, 4=13.4%, 8=68.0%, 16=11.7%, 32=0.0%, >=64=0.0%
00:26:29.377       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       complete  : 0=0.0%, 4=91.2%, 8=4.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.377       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.377  filename1: (groupid=0, jobs=1): err= 0: pid=91869: Mon Dec 16 06:36:44 2024
00:26:29.377    read: IOPS=244, BW=976KiB/s (1000kB/s)(9784KiB/10022msec)
00:26:29.377      slat (usec): min=4, max=12002, avg=26.79, stdev=363.75
00:26:29.377      clat (msec): min=29, max=150, avg=65.42, stdev=18.54
00:26:29.377       lat (msec): min=29, max=150, avg=65.45, stdev=18.54
00:26:29.377      clat percentiles (msec):
00:26:29.377       |  1.00th=[   34],  5.00th=[   39], 10.00th=[   44], 20.00th=[   49],
00:26:29.377       | 30.00th=[   57], 40.00th=[   60], 50.00th=[   63], 60.00th=[   69],
00:26:29.377       | 70.00th=[   73], 80.00th=[   81], 90.00th=[   88], 95.00th=[   99],
00:26:29.377       | 99.00th=[  128], 99.50th=[  134], 99.90th=[  150], 99.95th=[  150],
00:26:29.377       | 99.99th=[  150]
00:26:29.377     bw (  KiB/s): min=  640, max= 1224, per=3.88%, avg=972.05, stdev=174.85, samples=20
00:26:29.377     iops        : min=  160, max=  306, avg=243.00, stdev=43.74, samples=20
00:26:29.377    lat (msec)   : 50=21.22%, 100=74.49%, 250=4.29%
00:26:29.377    cpu          : usr=34.24%, sys=0.42%, ctx=1353, majf=0, minf=9
00:26:29.377    IO depths    : 1=1.2%, 2=2.8%, 4=9.5%, 8=73.7%, 16=12.8%, 32=0.0%, >=64=0.0%
00:26:29.377       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       complete  : 0=0.0%, 4=90.1%, 8=5.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       issued rwts: total=2446,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.377       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.377  filename2: (groupid=0, jobs=1): err= 0: pid=91870: Mon Dec 16 06:36:44 2024
00:26:29.377    read: IOPS=287, BW=1150KiB/s (1178kB/s)(11.3MiB/10046msec)
00:26:29.377      slat (usec): min=4, max=4061, avg=16.98, stdev=149.77
00:26:29.377      clat (msec): min=7, max=119, avg=55.40, stdev=16.29
00:26:29.377       lat (msec): min=7, max=119, avg=55.42, stdev=16.29
00:26:29.377      clat percentiles (msec):
00:26:29.377       |  1.00th=[   18],  5.00th=[   33], 10.00th=[   36], 20.00th=[   41],
00:26:29.377       | 30.00th=[   47], 40.00th=[   50], 50.00th=[   56], 60.00th=[   59],
00:26:29.377       | 70.00th=[   62], 80.00th=[   69], 90.00th=[   77], 95.00th=[   84],
00:26:29.377       | 99.00th=[   97], 99.50th=[  111], 99.90th=[  120], 99.95th=[  120],
00:26:29.377       | 99.99th=[  120]
00:26:29.377     bw (  KiB/s): min=  896, max= 1424, per=4.59%, avg=1152.65, stdev=145.18, samples=20
00:26:29.377     iops        : min=  224, max=  356, avg=288.15, stdev=36.30, samples=20
00:26:29.377    lat (msec)   : 10=0.55%, 20=0.55%, 50=39.06%, 100=59.11%, 250=0.73%
00:26:29.377    cpu          : usr=39.79%, sys=0.62%, ctx=1121, majf=0, minf=9
00:26:29.377    IO depths    : 1=0.9%, 2=2.7%, 4=11.0%, 8=72.9%, 16=12.5%, 32=0.0%, >=64=0.0%
00:26:29.377       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       complete  : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       issued rwts: total=2888,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.377       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.377  filename2: (groupid=0, jobs=1): err= 0: pid=91871: Mon Dec 16 06:36:44 2024
00:26:29.377    read: IOPS=240, BW=961KiB/s (984kB/s)(9620KiB/10010msec)
00:26:29.377      slat (usec): min=4, max=8032, avg=19.01, stdev=200.73
00:26:29.377      clat (msec): min=26, max=124, avg=66.44, stdev=17.14
00:26:29.377       lat (msec): min=26, max=124, avg=66.45, stdev=17.14
00:26:29.377      clat percentiles (msec):
00:26:29.377       |  1.00th=[   35],  5.00th=[   41], 10.00th=[   47], 20.00th=[   54],
00:26:29.377       | 30.00th=[   58], 40.00th=[   61], 50.00th=[   63], 60.00th=[   69],
00:26:29.377       | 70.00th=[   72], 80.00th=[   82], 90.00th=[   91], 95.00th=[   96],
00:26:29.377       | 99.00th=[  113], 99.50th=[  125], 99.90th=[  125], 99.95th=[  125],
00:26:29.377       | 99.99th=[  125]
00:26:29.377     bw (  KiB/s): min=  768, max= 1168, per=3.87%, avg=971.74, stdev=88.23, samples=19
00:26:29.377     iops        : min=  192, max=  292, avg=242.89, stdev=22.04, samples=19
00:26:29.377    lat (msec)   : 50=14.68%, 100=80.96%, 250=4.37%
00:26:29.377    cpu          : usr=37.75%, sys=0.49%, ctx=1019, majf=0, minf=9
00:26:29.377    IO depths    : 1=2.0%, 2=4.8%, 4=14.3%, 8=67.7%, 16=11.3%, 32=0.0%, >=64=0.0%
00:26:29.377       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       complete  : 0=0.0%, 4=91.4%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       issued rwts: total=2405,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.377       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.377  filename2: (groupid=0, jobs=1): err= 0: pid=91872: Mon Dec 16 06:36:44 2024
00:26:29.377    read: IOPS=245, BW=984KiB/s (1007kB/s)(9860KiB/10024msec)
00:26:29.377      slat (usec): min=4, max=8027, avg=22.22, stdev=258.41
00:26:29.377      clat (msec): min=23, max=156, avg=64.90, stdev=18.70
00:26:29.377       lat (msec): min=23, max=156, avg=64.93, stdev=18.70
00:26:29.377      clat percentiles (msec):
00:26:29.377       |  1.00th=[   28],  5.00th=[   39], 10.00th=[   46], 20.00th=[   52],
00:26:29.377       | 30.00th=[   58], 40.00th=[   60], 50.00th=[   61], 60.00th=[   64],
00:26:29.377       | 70.00th=[   71], 80.00th=[   79], 90.00th=[   85], 95.00th=[   99],
00:26:29.377       | 99.00th=[  124], 99.50th=[  132], 99.90th=[  157], 99.95th=[  157],
00:26:29.377       | 99.99th=[  157]
00:26:29.377     bw (  KiB/s): min=  640, max= 1152, per=3.90%, avg=979.30, stdev=146.26, samples=20
00:26:29.377     iops        : min=  160, max=  288, avg=244.80, stdev=36.58, samples=20
00:26:29.377    lat (msec)   : 50=19.76%, 100=76.11%, 250=4.14%
00:26:29.377    cpu          : usr=37.62%, sys=0.66%, ctx=1053, majf=0, minf=9
00:26:29.377    IO depths    : 1=2.4%, 2=5.4%, 4=14.8%, 8=66.7%, 16=10.8%, 32=0.0%, >=64=0.0%
00:26:29.377       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       complete  : 0=0.0%, 4=91.2%, 8=3.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       issued rwts: total=2465,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.377       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.377  filename2: (groupid=0, jobs=1): err= 0: pid=91873: Mon Dec 16 06:36:44 2024
00:26:29.377    read: IOPS=295, BW=1181KiB/s (1209kB/s)(11.6MiB/10062msec)
00:26:29.377      slat (usec): min=4, max=4028, avg=13.64, stdev=101.18
00:26:29.377      clat (msec): min=9, max=140, avg=54.11, stdev=18.26
00:26:29.377       lat (msec): min=9, max=140, avg=54.13, stdev=18.26
00:26:29.377      clat percentiles (msec):
00:26:29.377       |  1.00th=[   11],  5.00th=[   33], 10.00th=[   36], 20.00th=[   40],
00:26:29.377       | 30.00th=[   43], 40.00th=[   47], 50.00th=[   51], 60.00th=[   58],
00:26:29.377       | 70.00th=[   62], 80.00th=[   68], 90.00th=[   75], 95.00th=[   91],
00:26:29.377       | 99.00th=[  108], 99.50th=[  117], 99.90th=[  142], 99.95th=[  142],
00:26:29.377       | 99.99th=[  142]
00:26:29.377     bw (  KiB/s): min=  856, max= 1680, per=4.71%, avg=1182.00, stdev=217.16, samples=20
00:26:29.377     iops        : min=  214, max=  420, avg=295.50, stdev=54.29, samples=20
00:26:29.377    lat (msec)   : 10=0.27%, 20=1.35%, 50=47.43%, 100=48.47%, 250=2.49%
00:26:29.377    cpu          : usr=40.47%, sys=0.58%, ctx=1325, majf=0, minf=9
00:26:29.377    IO depths    : 1=0.7%, 2=1.7%, 4=8.6%, 8=76.2%, 16=12.8%, 32=0.0%, >=64=0.0%
00:26:29.377       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       complete  : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.377       issued rwts: total=2971,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.377       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.377  filename2: (groupid=0, jobs=1): err= 0: pid=91874: Mon Dec 16 06:36:44 2024
00:26:29.377    read: IOPS=248, BW=992KiB/s (1016kB/s)(9948KiB/10025msec)
00:26:29.377      slat (usec): min=4, max=4032, avg=18.82, stdev=125.46
00:26:29.377      clat (msec): min=25, max=141, avg=64.35, stdev=17.58
00:26:29.377       lat (msec): min=25, max=141, avg=64.37, stdev=17.57
00:26:29.377      clat percentiles (msec):
00:26:29.377       |  1.00th=[   32],  5.00th=[   39], 10.00th=[   43], 20.00th=[   52],
00:26:29.377       | 30.00th=[   56], 40.00th=[   59], 50.00th=[   62], 60.00th=[   65],
00:26:29.377       | 70.00th=[   71], 80.00th=[   81], 90.00th=[   88], 95.00th=[   94],
00:26:29.377       | 99.00th=[  120], 99.50th=[  128], 99.90th=[  142], 99.95th=[  142],
00:26:29.377       | 99.99th=[  142]
00:26:29.377     bw (  KiB/s): min=  640, max= 1328, per=3.94%, avg=988.05, stdev=153.55, samples=20
00:26:29.377     iops        : min=  160, max=  332, avg=247.00, stdev=38.41, samples=20
00:26:29.377    lat (msec)   : 50=18.25%, 100=78.53%, 250=3.22%
00:26:29.377    cpu          : usr=42.81%, sys=0.72%, ctx=1284, majf=0, minf=9
00:26:29.377    IO depths    : 1=1.8%, 2=4.1%, 4=11.9%, 8=70.0%, 16=12.1%, 32=0.0%, >=64=0.0%
00:26:29.378       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.378       complete  : 0=0.0%, 4=90.7%, 8=5.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.378       issued rwts: total=2487,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.378       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.378  filename2: (groupid=0, jobs=1): err= 0: pid=91875: Mon Dec 16 06:36:44 2024
00:26:29.378    read: IOPS=255, BW=1022KiB/s (1047kB/s)(10.0MiB/10037msec)
00:26:29.378      slat (usec): min=4, max=8017, avg=21.01, stdev=223.61
00:26:29.378      clat (msec): min=24, max=132, avg=62.47, stdev=18.06
00:26:29.378       lat (msec): min=24, max=132, avg=62.49, stdev=18.05
00:26:29.378      clat percentiles (msec):
00:26:29.378       |  1.00th=[   33],  5.00th=[   36], 10.00th=[   40], 20.00th=[   47],
00:26:29.378       | 30.00th=[   54], 40.00th=[   59], 50.00th=[   61], 60.00th=[   64],
00:26:29.378       | 70.00th=[   71], 80.00th=[   79], 90.00th=[   85], 95.00th=[   94],
00:26:29.378       | 99.00th=[  112], 99.50th=[  133], 99.90th=[  133], 99.95th=[  133],
00:26:29.378       | 99.99th=[  133]
00:26:29.378     bw (  KiB/s): min=  728, max= 1456, per=4.06%, avg=1019.25, stdev=194.47, samples=20
00:26:29.378     iops        : min=  182, max=  364, avg=254.80, stdev=48.63, samples=20
00:26:29.378    lat (msec)   : 50=26.90%, 100=70.18%, 250=2.92%
00:26:29.378    cpu          : usr=43.15%, sys=0.52%, ctx=1076, majf=0, minf=9
00:26:29.378    IO depths    : 1=1.4%, 2=3.0%, 4=10.6%, 8=72.6%, 16=12.4%, 32=0.0%, >=64=0.0%
00:26:29.378       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.378       complete  : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.378       issued rwts: total=2565,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.378       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.378  filename2: (groupid=0, jobs=1): err= 0: pid=91876: Mon Dec 16 06:36:44 2024
00:26:29.378    read: IOPS=266, BW=1065KiB/s (1090kB/s)(10.4MiB/10035msec)
00:26:29.378      slat (usec): min=4, max=8033, avg=18.25, stdev=219.43
00:26:29.378      clat (msec): min=23, max=140, avg=60.01, stdev=17.84
00:26:29.378       lat (msec): min=23, max=140, avg=60.02, stdev=17.84
00:26:29.378      clat percentiles (msec):
00:26:29.378       |  1.00th=[   32],  5.00th=[   36], 10.00th=[   38], 20.00th=[   46],
00:26:29.378       | 30.00th=[   48], 40.00th=[   56], 50.00th=[   60], 60.00th=[   61],
00:26:29.378       | 70.00th=[   68], 80.00th=[   72], 90.00th=[   85], 95.00th=[   94],
00:26:29.378       | 99.00th=[  118], 99.50th=[  121], 99.90th=[  142], 99.95th=[  142],
00:26:29.378       | 99.99th=[  142]
00:26:29.378     bw (  KiB/s): min=  688, max= 1336, per=4.23%, avg=1061.80, stdev=149.52, samples=20
00:26:29.378     iops        : min=  172, max=  334, avg=265.45, stdev=37.38, samples=20
00:26:29.378    lat (msec)   : 50=32.98%, 100=64.99%, 250=2.02%
00:26:29.378    cpu          : usr=34.78%, sys=0.54%, ctx=918, majf=0, minf=9
00:26:29.378    IO depths    : 1=0.8%, 2=2.0%, 4=8.6%, 8=75.4%, 16=13.2%, 32=0.0%, >=64=0.0%
00:26:29.378       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.378       complete  : 0=0.0%, 4=89.9%, 8=6.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.378       issued rwts: total=2671,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.378       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.378  filename2: (groupid=0, jobs=1): err= 0: pid=91877: Mon Dec 16 06:36:44 2024
00:26:29.378    read: IOPS=284, BW=1136KiB/s (1163kB/s)(11.2MiB/10058msec)
00:26:29.378      slat (usec): min=3, max=4016, avg=13.35, stdev=89.12
00:26:29.378      clat (msec): min=15, max=123, avg=56.20, stdev=17.65
00:26:29.378       lat (msec): min=15, max=123, avg=56.22, stdev=17.65
00:26:29.378      clat percentiles (msec):
00:26:29.378       |  1.00th=[   24],  5.00th=[   32], 10.00th=[   35], 20.00th=[   41],
00:26:29.378       | 30.00th=[   46], 40.00th=[   49], 50.00th=[   56], 60.00th=[   59],
00:26:29.378       | 70.00th=[   65], 80.00th=[   71], 90.00th=[   82], 95.00th=[   89],
00:26:29.378       | 99.00th=[  101], 99.50th=[  106], 99.90th=[  125], 99.95th=[  125],
00:26:29.378       | 99.99th=[  125]
00:26:29.378     bw (  KiB/s): min=  768, max= 1424, per=4.53%, avg=1136.35, stdev=174.42, samples=20
00:26:29.378     iops        : min=  192, max=  356, avg=284.05, stdev=43.59, samples=20
00:26:29.378    lat (msec)   : 20=0.56%, 50=41.83%, 100=56.70%, 250=0.91%
00:26:29.378    cpu          : usr=35.95%, sys=0.61%, ctx=1396, majf=0, minf=9
00:26:29.378    IO depths    : 1=0.6%, 2=1.3%, 4=7.2%, 8=77.3%, 16=13.6%, 32=0.0%, >=64=0.0%
00:26:29.378       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.378       complete  : 0=0.0%, 4=89.4%, 8=6.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.378       issued rwts: total=2857,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.378       latency   : target=0, window=0, percentile=100.00%, depth=16
00:26:29.378  
00:26:29.378  Run status group 0 (all jobs):
00:26:29.378     READ: bw=24.5MiB/s (25.7MB/s), 924KiB/s-1241KiB/s (946kB/s-1271kB/s), io=247MiB (259MB), run=10001-10066msec
00:26:29.378   06:36:44	-- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:26:29.378   06:36:44	-- target/dif.sh@43 -- # local sub
00:26:29.378   06:36:44	-- target/dif.sh@45 -- # for sub in "$@"
00:26:29.378   06:36:44	-- target/dif.sh@46 -- # destroy_subsystem 0
00:26:29.378   06:36:44	-- target/dif.sh@36 -- # local sub_id=0
00:26:29.378   06:36:44	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@45 -- # for sub in "$@"
00:26:29.378   06:36:44	-- target/dif.sh@46 -- # destroy_subsystem 1
00:26:29.378   06:36:44	-- target/dif.sh@36 -- # local sub_id=1
00:26:29.378   06:36:44	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@45 -- # for sub in "$@"
00:26:29.378   06:36:44	-- target/dif.sh@46 -- # destroy_subsystem 2
00:26:29.378   06:36:44	-- target/dif.sh@36 -- # local sub_id=2
00:26:29.378   06:36:44	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@115 -- # NULL_DIF=1
00:26:29.378   06:36:44	-- target/dif.sh@115 -- # bs=8k,16k,128k
00:26:29.378   06:36:44	-- target/dif.sh@115 -- # numjobs=2
00:26:29.378   06:36:44	-- target/dif.sh@115 -- # iodepth=8
00:26:29.378   06:36:44	-- target/dif.sh@115 -- # runtime=5
00:26:29.378   06:36:44	-- target/dif.sh@115 -- # files=1
00:26:29.378   06:36:44	-- target/dif.sh@117 -- # create_subsystems 0 1
00:26:29.378   06:36:44	-- target/dif.sh@28 -- # local sub
00:26:29.378   06:36:44	-- target/dif.sh@30 -- # for sub in "$@"
00:26:29.378   06:36:44	-- target/dif.sh@31 -- # create_subsystem 0
00:26:29.378   06:36:44	-- target/dif.sh@18 -- # local sub_id=0
00:26:29.378   06:36:44	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378  bdev_null0
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378  [2024-12-16 06:36:44.671802] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@30 -- # for sub in "$@"
00:26:29.378   06:36:44	-- target/dif.sh@31 -- # create_subsystem 1
00:26:29.378   06:36:44	-- target/dif.sh@18 -- # local sub_id=1
00:26:29.378   06:36:44	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378  bdev_null1
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:29.378   06:36:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:29.378   06:36:44	-- common/autotest_common.sh@10 -- # set +x
00:26:29.378   06:36:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:29.378   06:36:44	-- target/dif.sh@118 -- # fio /dev/fd/62
00:26:29.378    06:36:44	-- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:26:29.378    06:36:44	-- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:26:29.378    06:36:44	-- nvmf/common.sh@520 -- # config=()
00:26:29.378    06:36:44	-- nvmf/common.sh@520 -- # local subsystem config
00:26:29.378    06:36:44	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:29.378    06:36:44	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:29.378  {
00:26:29.378    "params": {
00:26:29.378      "name": "Nvme$subsystem",
00:26:29.378      "trtype": "$TEST_TRANSPORT",
00:26:29.379      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:29.379      "adrfam": "ipv4",
00:26:29.379      "trsvcid": "$NVMF_PORT",
00:26:29.379      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:29.379      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:29.379      "hdgst": ${hdgst:-false},
00:26:29.379      "ddgst": ${ddgst:-false}
00:26:29.379    },
00:26:29.379    "method": "bdev_nvme_attach_controller"
00:26:29.379  }
00:26:29.379  EOF
00:26:29.379  )")
00:26:29.379   06:36:44	-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:29.379    06:36:44	-- target/dif.sh@82 -- # gen_fio_conf
00:26:29.379   06:36:44	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:29.379    06:36:44	-- target/dif.sh@54 -- # local file
00:26:29.379   06:36:44	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:26:29.379    06:36:44	-- target/dif.sh@56 -- # cat
00:26:29.379     06:36:44	-- nvmf/common.sh@542 -- # cat
00:26:29.379   06:36:44	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:29.379   06:36:44	-- common/autotest_common.sh@1328 -- # local sanitizers
00:26:29.379   06:36:44	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:29.379   06:36:44	-- common/autotest_common.sh@1330 -- # shift
00:26:29.379   06:36:44	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:26:29.379   06:36:44	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:29.379    06:36:44	-- target/dif.sh@72 -- # (( file = 1 ))
00:26:29.379    06:36:44	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:29.379    06:36:44	-- target/dif.sh@72 -- # (( file <= files ))
00:26:29.379    06:36:44	-- target/dif.sh@73 -- # cat
00:26:29.379    06:36:44	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:29.379    06:36:44	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:29.379  {
00:26:29.379    "params": {
00:26:29.379      "name": "Nvme$subsystem",
00:26:29.379      "trtype": "$TEST_TRANSPORT",
00:26:29.379      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:29.379      "adrfam": "ipv4",
00:26:29.379      "trsvcid": "$NVMF_PORT",
00:26:29.379      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:29.379      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:29.379      "hdgst": ${hdgst:-false},
00:26:29.379      "ddgst": ${ddgst:-false}
00:26:29.379    },
00:26:29.379    "method": "bdev_nvme_attach_controller"
00:26:29.379  }
00:26:29.379  EOF
00:26:29.379  )")
00:26:29.379    06:36:44	-- common/autotest_common.sh@1334 -- # grep libasan
00:26:29.379    06:36:44	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:29.379     06:36:44	-- nvmf/common.sh@542 -- # cat
00:26:29.379    06:36:44	-- target/dif.sh@72 -- # (( file++ ))
00:26:29.379    06:36:44	-- target/dif.sh@72 -- # (( file <= files ))
00:26:29.379    06:36:44	-- nvmf/common.sh@544 -- # jq .
00:26:29.379     06:36:44	-- nvmf/common.sh@545 -- # IFS=,
00:26:29.379     06:36:44	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:26:29.379    "params": {
00:26:29.379      "name": "Nvme0",
00:26:29.379      "trtype": "tcp",
00:26:29.379      "traddr": "10.0.0.2",
00:26:29.379      "adrfam": "ipv4",
00:26:29.379      "trsvcid": "4420",
00:26:29.379      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:26:29.379      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:26:29.379      "hdgst": false,
00:26:29.379      "ddgst": false
00:26:29.379    },
00:26:29.379    "method": "bdev_nvme_attach_controller"
00:26:29.379  },{
00:26:29.379    "params": {
00:26:29.379      "name": "Nvme1",
00:26:29.379      "trtype": "tcp",
00:26:29.379      "traddr": "10.0.0.2",
00:26:29.379      "adrfam": "ipv4",
00:26:29.379      "trsvcid": "4420",
00:26:29.379      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:29.379      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:29.379      "hdgst": false,
00:26:29.379      "ddgst": false
00:26:29.379    },
00:26:29.379    "method": "bdev_nvme_attach_controller"
00:26:29.379  }'
00:26:29.379   06:36:44	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:29.379   06:36:44	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:29.379   06:36:44	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:29.379    06:36:44	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:29.379    06:36:44	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:26:29.379    06:36:44	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:29.379   06:36:44	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:29.379   06:36:44	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:29.379   06:36:44	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:26:29.379   06:36:44	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:29.379  filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:26:29.379  ...
00:26:29.379  filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:26:29.379  ...
00:26:29.379  fio-3.35
00:26:29.379  Starting 4 threads
00:26:29.379  [2024-12-16 06:36:45.463044] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:26:29.379  [2024-12-16 06:36:45.463293] rpc.c:  90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:26:34.649  
00:26:34.649  filename0: (groupid=0, jobs=1): err= 0: pid=92013: Mon Dec 16 06:36:50 2024
00:26:34.649    read: IOPS=2183, BW=17.1MiB/s (17.9MB/s)(85.3MiB/5001msec)
00:26:34.649      slat (nsec): min=6070, max=94737, avg=19999.88, stdev=8953.17
00:26:34.649      clat (usec): min=969, max=10922, avg=3572.75, stdev=501.58
00:26:34.649       lat (usec): min=987, max=10929, avg=3592.75, stdev=500.86
00:26:34.649      clat percentiles (usec):
00:26:34.649       |  1.00th=[ 2507],  5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3392],
00:26:34.649       | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523],
00:26:34.649       | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3752], 95.00th=[ 3982],
00:26:34.649       | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 8717], 99.95th=[10421],
00:26:34.649       | 99.99th=[10421]
00:26:34.649     bw (  KiB/s): min=15856, max=17936, per=24.94%, avg=17477.33, stdev=651.01, samples=9
00:26:34.649     iops        : min= 1982, max= 2242, avg=2184.67, stdev=81.38, samples=9
00:26:34.649    lat (usec)   : 1000=0.01%
00:26:34.649    lat (msec)   : 2=0.24%, 4=94.80%, 10=4.88%, 20=0.07%
00:26:34.649    cpu          : usr=94.80%, sys=3.92%, ctx=6, majf=0, minf=9
00:26:34.649    IO depths    : 1=5.1%, 2=25.0%, 4=50.0%, 8=19.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:34.649       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.649       complete  : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.649       issued rwts: total=10920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.649       latency   : target=0, window=0, percentile=100.00%, depth=8
00:26:34.649  filename0: (groupid=0, jobs=1): err= 0: pid=92014: Mon Dec 16 06:36:50 2024
00:26:34.649    read: IOPS=2206, BW=17.2MiB/s (18.1MB/s)(86.2MiB/5002msec)
00:26:34.649      slat (nsec): min=6120, max=60757, avg=8378.22, stdev=4061.81
00:26:34.649      clat (usec): min=986, max=9850, avg=3588.24, stdev=444.35
00:26:34.649       lat (usec): min=993, max=9857, avg=3596.61, stdev=444.29
00:26:34.649      clat percentiles (usec):
00:26:34.649       |  1.00th=[ 2114],  5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3458],
00:26:34.649       | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3556],
00:26:34.649       | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3752], 95.00th=[ 3916],
00:26:34.649       | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 7439], 99.95th=[ 8291],
00:26:34.649       | 99.99th=[ 8979]
00:26:34.649     bw (  KiB/s): min=16064, max=18048, per=25.24%, avg=17689.22, stdev=627.39, samples=9
00:26:34.649     iops        : min= 2008, max= 2256, avg=2211.11, stdev=78.42, samples=9
00:26:34.649    lat (usec)   : 1000=0.04%
00:26:34.649    lat (msec)   : 2=0.91%, 4=94.68%, 10=4.38%
00:26:34.649    cpu          : usr=94.52%, sys=4.18%, ctx=7, majf=0, minf=0
00:26:34.649    IO depths    : 1=6.3%, 2=18.2%, 4=56.6%, 8=18.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:34.649       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.649       complete  : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.649       issued rwts: total=11036,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.649       latency   : target=0, window=0, percentile=100.00%, depth=8
00:26:34.649  filename1: (groupid=0, jobs=1): err= 0: pid=92015: Mon Dec 16 06:36:50 2024
00:26:34.649    read: IOPS=2183, BW=17.1MiB/s (17.9MB/s)(85.3MiB/5001msec)
00:26:34.649      slat (nsec): min=5938, max=91058, avg=20338.76, stdev=7859.00
00:26:34.649      clat (usec): min=1429, max=10922, avg=3581.32, stdev=431.28
00:26:34.649       lat (usec): min=1439, max=10938, avg=3601.66, stdev=430.40
00:26:34.649      clat percentiles (usec):
00:26:34.649       |  1.00th=[ 3261],  5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3425],
00:26:34.649       | 30.00th=[ 3458], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523],
00:26:34.649       | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3752], 95.00th=[ 3949],
00:26:34.649       | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 8029], 99.95th=[10421],
00:26:34.649       | 99.99th=[10421]
00:26:34.649     bw (  KiB/s): min=15872, max=17920, per=24.94%, avg=17479.11, stdev=645.58, samples=9
00:26:34.649     iops        : min= 1984, max= 2240, avg=2184.89, stdev=80.70, samples=9
00:26:34.649    lat (msec)   : 2=0.14%, 4=95.43%, 10=4.36%, 20=0.07%
00:26:34.649    cpu          : usr=94.86%, sys=3.86%, ctx=12, majf=0, minf=9
00:26:34.649    IO depths    : 1=4.6%, 2=25.0%, 4=50.0%, 8=20.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:34.649       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.649       complete  : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.649       issued rwts: total=10920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.649       latency   : target=0, window=0, percentile=100.00%, depth=8
00:26:34.649  filename1: (groupid=0, jobs=1): err= 0: pid=92016: Mon Dec 16 06:36:50 2024
00:26:34.649    read: IOPS=2188, BW=17.1MiB/s (17.9MB/s)(85.5MiB/5001msec)
00:26:34.649      slat (nsec): min=6159, max=72477, avg=11282.96, stdev=8027.10
00:26:34.649      clat (usec): min=1230, max=10917, avg=3611.66, stdev=438.70
00:26:34.649       lat (usec): min=1237, max=10965, avg=3622.94, stdev=438.26
00:26:34.649      clat percentiles (usec):
00:26:34.649       |  1.00th=[ 2802],  5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3458],
00:26:34.649       | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589],
00:26:34.649       | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3785], 95.00th=[ 4047],
00:26:34.649       | 99.00th=[ 5473], 99.50th=[ 5538], 99.90th=[ 8094], 99.95th=[10421],
00:26:34.649       | 99.99th=[10421]
00:26:34.649     bw (  KiB/s): min=15872, max=17968, per=25.01%, avg=17528.89, stdev=656.25, samples=9
00:26:34.649     iops        : min= 1984, max= 2246, avg=2191.11, stdev=82.03, samples=9
00:26:34.649    lat (msec)   : 2=0.36%, 4=94.27%, 10=5.30%, 20=0.07%
00:26:34.649    cpu          : usr=95.00%, sys=3.72%, ctx=10, majf=0, minf=0
00:26:34.649    IO depths    : 1=4.8%, 2=11.7%, 4=63.1%, 8=20.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:34.649       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.649       complete  : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.649       issued rwts: total=10943,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.649       latency   : target=0, window=0, percentile=100.00%, depth=8
00:26:34.649  
00:26:34.649  Run status group 0 (all jobs):
00:26:34.649     READ: bw=68.4MiB/s (71.8MB/s), 17.1MiB/s-17.2MiB/s (17.9MB/s-18.1MB/s), io=342MiB (359MB), run=5001-5002msec
00:26:34.649   06:36:50	-- target/dif.sh@119 -- # destroy_subsystems 0 1
00:26:34.649   06:36:50	-- target/dif.sh@43 -- # local sub
00:26:34.649   06:36:50	-- target/dif.sh@45 -- # for sub in "$@"
00:26:34.649   06:36:50	-- target/dif.sh@46 -- # destroy_subsystem 0
00:26:34.649   06:36:50	-- target/dif.sh@36 -- # local sub_id=0
00:26:34.649   06:36:50	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:34.649   06:36:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:34.649   06:36:50	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649   06:36:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:34.649   06:36:50	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:26:34.649   06:36:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:34.649   06:36:50	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649   06:36:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:34.649   06:36:50	-- target/dif.sh@45 -- # for sub in "$@"
00:26:34.649   06:36:50	-- target/dif.sh@46 -- # destroy_subsystem 1
00:26:34.649   06:36:50	-- target/dif.sh@36 -- # local sub_id=1
00:26:34.649   06:36:50	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:34.649   06:36:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:34.649   06:36:50	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649   06:36:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:34.649   06:36:50	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:26:34.649   06:36:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:34.649   06:36:50	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649  ************************************
00:26:34.649  END TEST fio_dif_rand_params
00:26:34.649  ************************************
00:26:34.649   06:36:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:34.649  
00:26:34.649  real	0m23.900s
00:26:34.649  user	2m8.052s
00:26:34.649  sys	0m3.737s
00:26:34.649   06:36:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:34.649   06:36:50	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649   06:36:50	-- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:26:34.649   06:36:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:34.649   06:36:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:34.649   06:36:50	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649  ************************************
00:26:34.649  START TEST fio_dif_digest
00:26:34.649  ************************************
00:26:34.649   06:36:50	-- common/autotest_common.sh@1114 -- # fio_dif_digest
00:26:34.649   06:36:50	-- target/dif.sh@123 -- # local NULL_DIF
00:26:34.649   06:36:50	-- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:26:34.649   06:36:50	-- target/dif.sh@125 -- # local hdgst ddgst
00:26:34.649   06:36:50	-- target/dif.sh@127 -- # NULL_DIF=3
00:26:34.649   06:36:50	-- target/dif.sh@127 -- # bs=128k,128k,128k
00:26:34.649   06:36:50	-- target/dif.sh@127 -- # numjobs=3
00:26:34.649   06:36:50	-- target/dif.sh@127 -- # iodepth=3
00:26:34.649   06:36:50	-- target/dif.sh@127 -- # runtime=10
00:26:34.649   06:36:50	-- target/dif.sh@128 -- # hdgst=true
00:26:34.649   06:36:50	-- target/dif.sh@128 -- # ddgst=true
00:26:34.649   06:36:50	-- target/dif.sh@130 -- # create_subsystems 0
00:26:34.649   06:36:50	-- target/dif.sh@28 -- # local sub
00:26:34.649   06:36:50	-- target/dif.sh@30 -- # for sub in "$@"
00:26:34.649   06:36:50	-- target/dif.sh@31 -- # create_subsystem 0
00:26:34.649   06:36:50	-- target/dif.sh@18 -- # local sub_id=0
00:26:34.649   06:36:50	-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:26:34.649   06:36:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:34.649   06:36:50	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649  bdev_null0
00:26:34.649   06:36:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:34.649   06:36:50	-- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:26:34.649   06:36:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:34.649   06:36:50	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649   06:36:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:34.649   06:36:51	-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:26:34.649   06:36:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:34.649   06:36:51	-- common/autotest_common.sh@10 -- # set +x
00:26:34.649   06:36:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:34.649   06:36:51	-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:34.649   06:36:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:34.650   06:36:51	-- common/autotest_common.sh@10 -- # set +x
00:26:34.650  [2024-12-16 06:36:51.019903] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:34.650   06:36:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
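The four rpc_cmd calls above assemble the whole digest-test target: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, an NVMe-oF subsystem, the namespace mapping, and a TCP listener on 10.0.0.2:4420. rpc_cmd in these test helpers is a thin wrapper around scripts/rpc.py, so the same setup can be reproduced by hand against a running nvmf_tgt; a minimal sketch, assuming the default RPC socket and that a TCP transport was already created earlier in the run:

  # DIF-capable null bdev: 64 MiB, 512 B blocks, 16 B metadata, protection type 3
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # expose it over NVMe/TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420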
00:26:34.650   06:36:51	-- target/dif.sh@131 -- # fio /dev/fd/62
00:26:34.650    06:36:51	-- target/dif.sh@131 -- # create_json_sub_conf 0
00:26:34.650    06:36:51	-- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:26:34.650    06:36:51	-- nvmf/common.sh@520 -- # config=()
00:26:34.650    06:36:51	-- nvmf/common.sh@520 -- # local subsystem config
00:26:34.650    06:36:51	-- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:26:34.650    06:36:51	-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:26:34.650  {
00:26:34.650    "params": {
00:26:34.650      "name": "Nvme$subsystem",
00:26:34.650      "trtype": "$TEST_TRANSPORT",
00:26:34.650      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:34.650      "adrfam": "ipv4",
00:26:34.650      "trsvcid": "$NVMF_PORT",
00:26:34.650      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:34.650      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:34.650      "hdgst": ${hdgst:-false},
00:26:34.650      "ddgst": ${ddgst:-false}
00:26:34.650    },
00:26:34.650    "method": "bdev_nvme_attach_controller"
00:26:34.650  }
00:26:34.650  EOF
00:26:34.650  )")
00:26:34.650   06:36:51	-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:34.650    06:36:51	-- target/dif.sh@82 -- # gen_fio_conf
00:26:34.650   06:36:51	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:34.650   06:36:51	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:26:34.650    06:36:51	-- target/dif.sh@54 -- # local file
00:26:34.650    06:36:51	-- target/dif.sh@56 -- # cat
00:26:34.650     06:36:51	-- nvmf/common.sh@542 -- # cat
00:26:34.650   06:36:51	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:34.650   06:36:51	-- common/autotest_common.sh@1328 -- # local sanitizers
00:26:34.650   06:36:51	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:34.650   06:36:51	-- common/autotest_common.sh@1330 -- # shift
00:26:34.650   06:36:51	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:26:34.650   06:36:51	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:34.650    06:36:51	-- nvmf/common.sh@544 -- # jq .
00:26:34.650    06:36:51	-- target/dif.sh@72 -- # (( file = 1 ))
00:26:34.650    06:36:51	-- target/dif.sh@72 -- # (( file <= files ))
00:26:34.650    06:36:51	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:34.650    06:36:51	-- common/autotest_common.sh@1334 -- # grep libasan
00:26:34.650    06:36:51	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:34.650     06:36:51	-- nvmf/common.sh@545 -- # IFS=,
00:26:34.650     06:36:51	-- nvmf/common.sh@546 -- # printf '%s\n' '{
00:26:34.650    "params": {
00:26:34.650      "name": "Nvme0",
00:26:34.650      "trtype": "tcp",
00:26:34.650      "traddr": "10.0.0.2",
00:26:34.650      "adrfam": "ipv4",
00:26:34.650      "trsvcid": "4420",
00:26:34.650      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:26:34.650      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:26:34.650      "hdgst": true,
00:26:34.650      "ddgst": true
00:26:34.650    },
00:26:34.650    "method": "bdev_nvme_attach_controller"
00:26:34.650  }'
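The heredoc template above is expanded once per subsystem and piped through jq, producing the rendered Nvme0 attach_controller entry just printed; hdgst/ddgst come from the hdgst=true/ddgst=true set at the top of the test. A standalone sketch of rendering one such entry, assuming jq is available and hard-coding the transport values this run used (the real helper also wraps the entries in a full bdev-subsystem JSON config):

  sub=0 hdgst=true ddgst=true
  jq -n --arg sub "$sub" --argjson hdgst "$hdgst" --argjson ddgst "$ddgst" '{
      params: {
        name: ("Nvme" + $sub),
        trtype: "tcp",
        traddr: "10.0.0.2",
        adrfam: "ipv4",
        trsvcid: "4420",
        subnqn: ("nqn.2016-06.io.spdk:cnode" + $sub),
        hostnqn: ("nqn.2016-06.io.spdk:host" + $sub),
        hdgst: $hdgst,
        ddgst: $ddgst
      },
      method: "bdev_nvme_attach_controller"
    }'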
00:26:34.650   06:36:51	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:34.650   06:36:51	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:34.650   06:36:51	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:26:34.650    06:36:51	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:26:34.650    06:36:51	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:26:34.650    06:36:51	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:26:34.650   06:36:51	-- common/autotest_common.sh@1334 -- # asan_lib=
00:26:34.650   06:36:51	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:26:34.650   06:36:51	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:26:34.650   06:36:51	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:34.650  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:26:34.650  ...
00:26:34.650  fio-3.35
00:26:34.650  Starting 3 threads
00:26:34.650  [2024-12-16 06:36:51.578766] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:26:34.650  [2024-12-16 06:36:51.579211] rpc.c:  90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:26:46.855  
00:26:46.855  filename0: (groupid=0, jobs=1): err= 0: pid=92122: Mon Dec 16 06:37:01 2024
00:26:46.855    read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(320MiB/10004msec)
00:26:46.855      slat (nsec): min=6555, max=72652, avg=18182.26, stdev=7001.88
00:26:46.855      clat (usec): min=3889, max=16692, avg=11685.97, stdev=2121.96
00:26:46.855       lat (usec): min=3910, max=16712, avg=11704.16, stdev=2122.46
00:26:46.855      clat percentiles (usec):
00:26:46.855       |  1.00th=[ 6783],  5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[10552],
00:26:46.855       | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649],
00:26:46.855       | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091],
00:26:46.855       | 99.00th=[14746], 99.50th=[15008], 99.90th=[15795], 99.95th=[15795],
00:26:46.855       | 99.99th=[16712]
00:26:46.855     bw (  KiB/s): min=30208, max=37632, per=34.48%, avg=32714.11, stdev=1870.69, samples=19
00:26:46.855     iops        : min=  236, max=  294, avg=255.58, stdev=14.61, samples=19
00:26:46.855    lat (msec)   : 4=0.04%, 10=18.92%, 20=81.04%
00:26:46.855    cpu          : usr=94.57%, sys=3.90%, ctx=7, majf=0, minf=9
00:26:46.855    IO depths    : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:46.855       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:46.855       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:46.855       issued rwts: total=2563,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:46.855       latency   : target=0, window=0, percentile=100.00%, depth=3
00:26:46.855  filename0: (groupid=0, jobs=1): err= 0: pid=92123: Mon Dec 16 06:37:01 2024
00:26:46.855    read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(284MiB/10044msec)
00:26:46.855      slat (nsec): min=6178, max=72380, avg=15687.25, stdev=5317.65
00:26:46.855      clat (usec): min=7721, max=50011, avg=13219.48, stdev=2398.54
00:26:46.855       lat (usec): min=7733, max=50023, avg=13235.16, stdev=2398.17
00:26:46.855      clat percentiles (usec):
00:26:46.855       |  1.00th=[ 8094],  5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[12387],
00:26:46.855       | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091],
00:26:46.855       | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401],
00:26:46.855       | 99.00th=[16057], 99.50th=[16188], 99.90th=[18744], 99.95th=[46400],
00:26:46.855       | 99.99th=[50070]
00:26:46.855     bw (  KiB/s): min=26624, max=32768, per=30.57%, avg=29013.50, stdev=1460.29, samples=20
00:26:46.855     iops        : min=  208, max=  256, avg=226.65, stdev=11.41, samples=20
00:26:46.855    lat (msec)   : 10=15.97%, 20=83.94%, 50=0.04%, 100=0.04%
00:26:46.855    cpu          : usr=93.17%, sys=4.87%, ctx=12, majf=0, minf=9
00:26:46.855    IO depths    : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:46.855       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:46.855       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:46.855       issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:46.855       latency   : target=0, window=0, percentile=100.00%, depth=3
00:26:46.855  filename0: (groupid=0, jobs=1): err= 0: pid=92124: Mon Dec 16 06:37:01 2024
00:26:46.855    read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(326MiB/10007msec)
00:26:46.855      slat (nsec): min=6134, max=58198, avg=13984.88, stdev=6064.89
00:26:46.855      clat (usec): min=6683, max=52282, avg=11483.24, stdev=7458.64
00:26:46.855       lat (usec): min=6703, max=52300, avg=11497.22, stdev=7458.63
00:26:46.855      clat percentiles (usec):
00:26:46.855       |  1.00th=[ 8455],  5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503],
00:26:46.855       | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290],
00:26:46.855       | 70.00th=[10552], 80.00th=[10683], 90.00th=[11207], 95.00th=[11600],
00:26:46.855       | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167],
00:26:46.855       | 99.99th=[52167]
00:26:46.855     bw (  KiB/s): min=26368, max=38656, per=35.45%, avg=33643.79, stdev=3088.92, samples=19
00:26:46.855     iops        : min=  206, max=  302, avg=262.84, stdev=24.13, samples=19
00:26:46.855    lat (msec)   : 10=43.30%, 20=53.26%, 50=0.54%, 100=2.91%
00:26:46.855    cpu          : usr=93.47%, sys=4.89%, ctx=682, majf=0, minf=9
00:26:46.855    IO depths    : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:46.855       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:46.855       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:46.855       issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:46.855       latency   : target=0, window=0, percentile=100.00%, depth=3
00:26:46.855  
00:26:46.855  Run status group 0 (all jobs):
00:26:46.855     READ: bw=92.7MiB/s (97.2MB/s), 28.3MiB/s-32.6MiB/s (29.7MB/s-34.2MB/s), io=931MiB (976MB), run=10004-10044msec
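The per-job numbers are internally consistent: with 128 KiB reads, bandwidth is just IOPS times block size, e.g. 256 IOPS x 128 KiB = 32 MiB/s for the first job, matching its reported 32.0 MiB/s, and the three jobs roughly sum to the group line (32.0 + 28.3 + 32.6 = 92.9 MiB/s versus 92.7 MiB/s, the small gap coming from rounding and the slightly different 10004-10044 ms runtimes). A quick check with any POSIX awk:

  awk 'BEGIN { printf "%.1f MiB/s\n", 256 * 128 / 1024 }'   # 32.0 MiB/s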
00:26:46.855   06:37:01	-- target/dif.sh@132 -- # destroy_subsystems 0
00:26:46.855   06:37:01	-- target/dif.sh@43 -- # local sub
00:26:46.855   06:37:01	-- target/dif.sh@45 -- # for sub in "$@"
00:26:46.855   06:37:01	-- target/dif.sh@46 -- # destroy_subsystem 0
00:26:46.855   06:37:01	-- target/dif.sh@36 -- # local sub_id=0
00:26:46.855   06:37:01	-- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:46.855   06:37:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.855   06:37:01	-- common/autotest_common.sh@10 -- # set +x
00:26:46.855   06:37:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.855   06:37:01	-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:26:46.856   06:37:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.856   06:37:01	-- common/autotest_common.sh@10 -- # set +x
00:26:46.856  ************************************
00:26:46.856  END TEST fio_dif_digest
00:26:46.856  ************************************
00:26:46.856   06:37:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.856  
00:26:46.856  real	0m11.013s
00:26:46.856  user	0m28.835s
00:26:46.856  sys	0m1.634s
00:26:46.856   06:37:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:46.856   06:37:01	-- common/autotest_common.sh@10 -- # set +x
00:26:46.856   06:37:02	-- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:26:46.856   06:37:02	-- target/dif.sh@147 -- # nvmftestfini
00:26:46.856   06:37:02	-- nvmf/common.sh@476 -- # nvmfcleanup
00:26:46.856   06:37:02	-- nvmf/common.sh@116 -- # sync
00:26:46.856   06:37:02	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:46.856   06:37:02	-- nvmf/common.sh@119 -- # set +e
00:26:46.856   06:37:02	-- nvmf/common.sh@120 -- # for i in {1..20}
00:26:46.856   06:37:02	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:46.856  rmmod nvme_tcp
00:26:46.856  rmmod nvme_fabrics
00:26:46.856  rmmod nvme_keyring
00:26:46.856   06:37:02	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:46.856   06:37:02	-- nvmf/common.sh@123 -- # set -e
00:26:46.856   06:37:02	-- nvmf/common.sh@124 -- # return 0
00:26:46.856   06:37:02	-- nvmf/common.sh@477 -- # '[' -n 91350 ']'
00:26:46.856   06:37:02	-- nvmf/common.sh@478 -- # killprocess 91350
00:26:46.856   06:37:02	-- common/autotest_common.sh@936 -- # '[' -z 91350 ']'
00:26:46.856   06:37:02	-- common/autotest_common.sh@940 -- # kill -0 91350
00:26:46.856    06:37:02	-- common/autotest_common.sh@941 -- # uname
00:26:46.856   06:37:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:46.856    06:37:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91350
00:26:46.856  killing process with pid 91350
00:26:46.856   06:37:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:46.856   06:37:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:46.856   06:37:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 91350'
00:26:46.856   06:37:02	-- common/autotest_common.sh@955 -- # kill 91350
00:26:46.856   06:37:02	-- common/autotest_common.sh@960 -- # wait 91350
00:26:46.856   06:37:02	-- nvmf/common.sh@480 -- # '[' iso == iso ']'
00:26:46.856   06:37:02	-- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:26:46.856  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:46.856  Waiting for block devices as requested
00:26:46.856  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:26:46.856  0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:26:46.856   06:37:03	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:46.856   06:37:03	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:46.856   06:37:03	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:46.856   06:37:03	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:46.856   06:37:03	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:46.856   06:37:03	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:26:46.856    06:37:03	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:46.856   06:37:03	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:26:46.856  ************************************
00:26:46.856  END TEST nvmf_dif
00:26:46.856  ************************************
00:26:46.856  
00:26:46.856  real	1m0.635s
00:26:46.856  user	3m54.204s
00:26:46.856  sys	0m13.276s
00:26:46.856   06:37:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:46.856   06:37:03	-- common/autotest_common.sh@10 -- # set +x
00:26:46.856   06:37:03	-- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh
00:26:46.856   06:37:03	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:46.856   06:37:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:46.856   06:37:03	-- common/autotest_common.sh@10 -- # set +x
00:26:46.856  ************************************
00:26:46.856  START TEST nvmf_abort_qd_sizes
00:26:46.856  ************************************
00:26:46.856   06:37:03	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh
00:26:46.856  * Looking for test storage...
00:26:46.856  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:26:46.856    06:37:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:26:46.856     06:37:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:26:46.856     06:37:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:26:46.856    06:37:03	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:26:46.856    06:37:03	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:26:46.856    06:37:03	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:26:46.856    06:37:03	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:26:46.856    06:37:03	-- scripts/common.sh@335 -- # IFS=.-:
00:26:46.856    06:37:03	-- scripts/common.sh@335 -- # read -ra ver1
00:26:46.856    06:37:03	-- scripts/common.sh@336 -- # IFS=.-:
00:26:46.856    06:37:03	-- scripts/common.sh@336 -- # read -ra ver2
00:26:46.856    06:37:03	-- scripts/common.sh@337 -- # local 'op=<'
00:26:46.856    06:37:03	-- scripts/common.sh@339 -- # ver1_l=2
00:26:46.856    06:37:03	-- scripts/common.sh@340 -- # ver2_l=1
00:26:46.856    06:37:03	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:26:46.856    06:37:03	-- scripts/common.sh@343 -- # case "$op" in
00:26:46.856    06:37:03	-- scripts/common.sh@344 -- # : 1
00:26:46.856    06:37:03	-- scripts/common.sh@363 -- # (( v = 0 ))
00:26:46.856    06:37:03	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:46.856     06:37:03	-- scripts/common.sh@364 -- # decimal 1
00:26:46.856     06:37:03	-- scripts/common.sh@352 -- # local d=1
00:26:46.856     06:37:03	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:46.856     06:37:03	-- scripts/common.sh@354 -- # echo 1
00:26:46.856    06:37:03	-- scripts/common.sh@364 -- # ver1[v]=1
00:26:46.856     06:37:03	-- scripts/common.sh@365 -- # decimal 2
00:26:46.856     06:37:03	-- scripts/common.sh@352 -- # local d=2
00:26:46.856     06:37:03	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:46.856     06:37:03	-- scripts/common.sh@354 -- # echo 2
00:26:46.856    06:37:03	-- scripts/common.sh@365 -- # ver2[v]=2
00:26:46.856    06:37:03	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:26:46.856    06:37:03	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:26:46.856    06:37:03	-- scripts/common.sh@367 -- # return 0
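The scripts/common.sh trace above is the coverage-tool version gate: the installed lcov version (taken with awk '{print $NF}') is compared against 2 by splitting both strings on '.', '-' and ':' and comparing the fields as integers, so 1.15 sorts below 2 and the older lcov option set is chosen. A compact standalone sketch of the same comparison (the function name here is illustrative, not from the repo):

  ver_lt() {
      local IFS=.-:
      local -a ver1 ver2
      local v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller in this field
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger in this field
      done
      return 1                                              # equal overall: not less-than
  }
  ver_lt 1.15 2 && echo "1.15 < 2"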
00:26:46.856    06:37:03	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:46.856    06:37:03	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:26:46.856  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:46.856  		--rc genhtml_branch_coverage=1
00:26:46.856  		--rc genhtml_function_coverage=1
00:26:46.856  		--rc genhtml_legend=1
00:26:46.856  		--rc geninfo_all_blocks=1
00:26:46.856  		--rc geninfo_unexecuted_blocks=1
00:26:46.856  		
00:26:46.856  		'
00:26:46.856    06:37:03	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:26:46.856  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:46.856  		--rc genhtml_branch_coverage=1
00:26:46.856  		--rc genhtml_function_coverage=1
00:26:46.856  		--rc genhtml_legend=1
00:26:46.856  		--rc geninfo_all_blocks=1
00:26:46.856  		--rc geninfo_unexecuted_blocks=1
00:26:46.856  		
00:26:46.856  		'
00:26:46.856    06:37:03	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:26:46.856  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:46.856  		--rc genhtml_branch_coverage=1
00:26:46.856  		--rc genhtml_function_coverage=1
00:26:46.856  		--rc genhtml_legend=1
00:26:46.856  		--rc geninfo_all_blocks=1
00:26:46.856  		--rc geninfo_unexecuted_blocks=1
00:26:46.856  		
00:26:46.856  		'
00:26:46.856    06:37:03	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:26:46.856  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:46.856  		--rc genhtml_branch_coverage=1
00:26:46.856  		--rc genhtml_function_coverage=1
00:26:46.856  		--rc genhtml_legend=1
00:26:46.856  		--rc geninfo_all_blocks=1
00:26:46.856  		--rc geninfo_unexecuted_blocks=1
00:26:46.856  		
00:26:46.856  		'
00:26:46.856   06:37:03	-- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:26:46.856     06:37:03	-- nvmf/common.sh@7 -- # uname -s
00:26:46.856    06:37:03	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:46.856    06:37:03	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:46.856    06:37:03	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:46.856    06:37:03	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:46.856    06:37:03	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:46.856    06:37:03	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:46.856    06:37:03	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:46.856    06:37:03	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:46.856    06:37:03	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:46.856     06:37:03	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:46.856    06:37:03	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e
00:26:46.856    06:37:03	-- nvmf/common.sh@18 -- # NVME_HOSTID=637bef51-f626-4f39-9a90-287f11e9b21e
00:26:46.856    06:37:03	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:46.856    06:37:03	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:46.856    06:37:03	-- nvmf/common.sh@21 -- # NET_TYPE=virt
00:26:46.856    06:37:03	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:46.856     06:37:03	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:46.856     06:37:03	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:46.856     06:37:03	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:46.856      06:37:03	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:46.856      06:37:03	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:46.856      06:37:03	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:46.856      06:37:03	-- paths/export.sh@5 -- # export PATH
00:26:46.857      06:37:03	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:46.857    06:37:03	-- nvmf/common.sh@46 -- # : 0
00:26:46.857    06:37:03	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:26:46.857    06:37:03	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:26:46.857    06:37:03	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:26:46.857    06:37:03	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:46.857    06:37:03	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:46.857    06:37:03	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:26:46.857    06:37:03	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:26:46.857    06:37:03	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:26:46.857   06:37:03	-- target/abort_qd_sizes.sh@73 -- # nvmftestinit
00:26:46.857   06:37:03	-- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:26:46.857   06:37:03	-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:46.857   06:37:03	-- nvmf/common.sh@436 -- # prepare_net_devs
00:26:46.857   06:37:03	-- nvmf/common.sh@398 -- # local -g is_hw=no
00:26:46.857   06:37:03	-- nvmf/common.sh@400 -- # remove_spdk_ns
00:26:46.857   06:37:03	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:46.857   06:37:03	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:26:46.857    06:37:03	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:46.857   06:37:03	-- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:26:46.857   06:37:03	-- nvmf/common.sh@404 -- # [[ no == yes ]]
00:26:46.857   06:37:03	-- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:26:46.857   06:37:03	-- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:26:46.857   06:37:03	-- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:26:46.857   06:37:03	-- nvmf/common.sh@420 -- # nvmf_veth_init
00:26:46.857   06:37:03	-- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:46.857   06:37:03	-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:46.857   06:37:03	-- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:26:46.857   06:37:03	-- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:26:46.857   06:37:03	-- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:26:46.857   06:37:03	-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:26:46.857   06:37:03	-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:26:46.857   06:37:03	-- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:46.857   06:37:03	-- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:26:46.857   06:37:03	-- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:26:46.857   06:37:03	-- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:26:46.857   06:37:03	-- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:26:46.857   06:37:03	-- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:26:46.857   06:37:03	-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:26:46.857  Cannot find device "nvmf_tgt_br"
00:26:46.857   06:37:03	-- nvmf/common.sh@154 -- # true
00:26:46.857   06:37:03	-- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:26:46.857  Cannot find device "nvmf_tgt_br2"
00:26:46.857   06:37:03	-- nvmf/common.sh@155 -- # true
00:26:46.857   06:37:03	-- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:26:46.857   06:37:03	-- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:26:46.857  Cannot find device "nvmf_tgt_br"
00:26:46.857   06:37:03	-- nvmf/common.sh@157 -- # true
00:26:46.857   06:37:03	-- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:26:46.857  Cannot find device "nvmf_tgt_br2"
00:26:46.857   06:37:03	-- nvmf/common.sh@158 -- # true
00:26:46.857   06:37:03	-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:26:46.857   06:37:03	-- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:26:46.857   06:37:03	-- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:46.857  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:46.857   06:37:03	-- nvmf/common.sh@161 -- # true
00:26:46.857   06:37:03	-- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:46.857  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:46.857   06:37:03	-- nvmf/common.sh@162 -- # true
00:26:46.857   06:37:03	-- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:26:46.857   06:37:03	-- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:26:46.857   06:37:03	-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:26:46.857   06:37:03	-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:26:46.857   06:37:03	-- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:26:46.857   06:37:03	-- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:26:46.857   06:37:03	-- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:26:46.857   06:37:03	-- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:26:46.857   06:37:03	-- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:26:46.857   06:37:03	-- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:26:46.857   06:37:03	-- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:26:46.857   06:37:03	-- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:26:46.857   06:37:03	-- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:26:46.857   06:37:03	-- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:26:46.857   06:37:03	-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:26:46.857   06:37:03	-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:26:46.857   06:37:03	-- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:26:46.857   06:37:03	-- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:26:46.857   06:37:03	-- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:26:46.857   06:37:03	-- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:26:46.857   06:37:03	-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:26:46.857   06:37:03	-- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:26:46.857   06:37:03	-- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:26:46.857   06:37:03	-- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:26:46.857  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:46.857  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms
00:26:46.857  
00:26:46.857  --- 10.0.0.2 ping statistics ---
00:26:46.857  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:46.857  rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:26:46.857   06:37:03	-- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:26:46.857  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:46.857  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms
00:26:46.857  
00:26:46.857  --- 10.0.0.3 ping statistics ---
00:26:46.857  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:46.857  rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:26:46.857   06:37:03	-- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:26:46.857  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:46.857  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms
00:26:46.857  
00:26:46.857  --- 10.0.0.1 ping statistics ---
00:26:46.857  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:46.857  rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:26:46.857   06:37:03	-- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:46.857   06:37:03	-- nvmf/common.sh@421 -- # return 0
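nvmf_veth_init above builds the whole virtual test network from scratch: a target namespace, veth pairs whose host-side ends hang off a bridge, addresses on 10.0.0.0/24, and a firewall opening for port 4420, verified by the three pings. Condensed into one place (interface and namespace names exactly as used in this run; the second target interface nvmf_tgt_if2 / 10.0.0.3 is created the same way and is omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # host -> target namespace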
00:26:46.857   06:37:03	-- nvmf/common.sh@438 -- # '[' iso == iso ']'
00:26:46.857   06:37:03	-- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:26:47.424  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:47.683  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:26:47.683  0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:26:47.683   06:37:04	-- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:47.683   06:37:04	-- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:26:47.683   06:37:04	-- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:26:47.683   06:37:04	-- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:47.683   06:37:04	-- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:26:47.683   06:37:04	-- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:26:47.683   06:37:04	-- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf
00:26:47.683   06:37:04	-- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:26:47.683   06:37:04	-- common/autotest_common.sh@722 -- # xtrace_disable
00:26:47.683   06:37:04	-- common/autotest_common.sh@10 -- # set +x
00:26:47.683   06:37:04	-- nvmf/common.sh@469 -- # nvmfpid=92723
00:26:47.683   06:37:04	-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:26:47.683   06:37:04	-- nvmf/common.sh@470 -- # waitforlisten 92723
00:26:47.683   06:37:04	-- common/autotest_common.sh@829 -- # '[' -z 92723 ']'
00:26:47.683   06:37:04	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:47.683  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:47.683   06:37:04	-- common/autotest_common.sh@834 -- # local max_retries=100
00:26:47.683   06:37:04	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:47.683   06:37:04	-- common/autotest_common.sh@838 -- # xtrace_disable
00:26:47.683   06:37:04	-- common/autotest_common.sh@10 -- # set +x
00:26:47.941  [2024-12-16 06:37:04.678808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:47.941  [2024-12-16 06:37:04.679087] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:47.941  [2024-12-16 06:37:04.821951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:48.201  [2024-12-16 06:37:04.933893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:48.201  [2024-12-16 06:37:04.934075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:48.201  [2024-12-16 06:37:04.934093] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:48.201  [2024-12-16 06:37:04.934105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:48.201  [2024-12-16 06:37:04.934874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:48.201  [2024-12-16 06:37:04.935047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:48.201  [2024-12-16 06:37:04.936804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:26:48.201  [2024-12-16 06:37:04.936886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:48.775   06:37:05	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:48.775   06:37:05	-- common/autotest_common.sh@862 -- # return 0
00:26:48.775   06:37:05	-- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:26:48.775   06:37:05	-- common/autotest_common.sh@728 -- # xtrace_disable
00:26:48.775   06:37:05	-- common/autotest_common.sh@10 -- # set +x
00:26:49.044   06:37:05	-- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes
00:26:49.044    06:37:05	-- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace
00:26:49.044    06:37:05	-- scripts/common.sh@311 -- # local bdf bdfs
00:26:49.044    06:37:05	-- scripts/common.sh@312 -- # local nvmes
00:26:49.044    06:37:05	-- scripts/common.sh@314 -- # [[ -n '' ]]
00:26:49.044    06:37:05	-- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:26:49.044     06:37:05	-- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02
00:26:49.044     06:37:05	-- scripts/common.sh@297 -- # local bdf=
00:26:49.044      06:37:05	-- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02
00:26:49.044      06:37:05	-- scripts/common.sh@232 -- # local class
00:26:49.044      06:37:05	-- scripts/common.sh@233 -- # local subclass
00:26:49.044      06:37:05	-- scripts/common.sh@234 -- # local progif
00:26:49.044       06:37:05	-- scripts/common.sh@235 -- # printf %02x 1
00:26:49.044      06:37:05	-- scripts/common.sh@235 -- # class=01
00:26:49.044       06:37:05	-- scripts/common.sh@236 -- # printf %02x 8
00:26:49.044      06:37:05	-- scripts/common.sh@236 -- # subclass=08
00:26:49.044       06:37:05	-- scripts/common.sh@237 -- # printf %02x 2
00:26:49.044      06:37:05	-- scripts/common.sh@237 -- # progif=02
00:26:49.044      06:37:05	-- scripts/common.sh@239 -- # hash lspci
00:26:49.044      06:37:05	-- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']'
00:26:49.044      06:37:05	-- scripts/common.sh@241 -- # lspci -mm -n -D
00:26:49.044      06:37:05	-- scripts/common.sh@242 -- # grep -i -- -p02
00:26:49.044      06:37:05	-- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:26:49.044      06:37:05	-- scripts/common.sh@244 -- # tr -d '"'
00:26:49.044     06:37:05	-- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:26:49.044     06:37:05	-- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0
00:26:49.044     06:37:05	-- scripts/common.sh@15 -- # local i
00:26:49.044     06:37:05	-- scripts/common.sh@18 -- # [[    =~  0000:00:06.0  ]]
00:26:49.044     06:37:05	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:26:49.044     06:37:05	-- scripts/common.sh@24 -- # return 0
00:26:49.044     06:37:05	-- scripts/common.sh@301 -- # echo 0000:00:06.0
00:26:49.044     06:37:05	-- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:26:49.044     06:37:05	-- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0
00:26:49.044     06:37:05	-- scripts/common.sh@15 -- # local i
00:26:49.044     06:37:05	-- scripts/common.sh@18 -- # [[    =~  0000:00:07.0  ]]
00:26:49.044     06:37:05	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:26:49.044     06:37:05	-- scripts/common.sh@24 -- # return 0
00:26:49.044     06:37:05	-- scripts/common.sh@301 -- # echo 0000:00:07.0
00:26:49.044    06:37:05	-- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:26:49.044    06:37:05	-- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]]
00:26:49.044     06:37:05	-- scripts/common.sh@322 -- # uname -s
00:26:49.044    06:37:05	-- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:26:49.044    06:37:05	-- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:26:49.044    06:37:05	-- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:26:49.044    06:37:05	-- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]]
00:26:49.044     06:37:05	-- scripts/common.sh@322 -- # uname -s
00:26:49.044    06:37:05	-- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:26:49.044    06:37:05	-- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:26:49.044    06:37:05	-- scripts/common.sh@327 -- # (( 2 ))
00:26:49.044    06:37:05	-- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 ))
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0
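nvme_in_userspace above walks lspci output for class 01 / subclass 08 / prog-if 02 (NVMe) and keeps only functions still bound to the kernel nvme driver (the /sys/bus/pci/drivers/nvme checks), yielding the two QEMU NVMe devices; the abort test then targets the first one. The enumeration collapses to a single pipeline, taken directly from the trace:

  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # -> 0000:00:06.0
  #    0000:00:07.0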
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target
00:26:49.044   06:37:05	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:49.044   06:37:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:49.044   06:37:05	-- common/autotest_common.sh@10 -- # set +x
00:26:49.044  ************************************
00:26:49.044  START TEST spdk_target_abort
00:26:49.044  ************************************
00:26:49.044   06:37:05	-- common/autotest_common.sh@1114 -- # spdk_target
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
00:26:49.044   06:37:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.044   06:37:05	-- common/autotest_common.sh@10 -- # set +x
00:26:49.044  spdk_targetn1
00:26:49.044   06:37:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:49.044   06:37:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.044   06:37:05	-- common/autotest_common.sh@10 -- # set +x
00:26:49.044  [2024-12-16 06:37:05.911863] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:49.044   06:37:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
00:26:49.044   06:37:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.044   06:37:05	-- common/autotest_common.sh@10 -- # set +x
00:26:49.044   06:37:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.044   06:37:05	-- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
00:26:49.044   06:37:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.044   06:37:05	-- common/autotest_common.sh@10 -- # set +x
00:26:49.044   06:37:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
00:26:49.045   06:37:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.045   06:37:05	-- common/autotest_common.sh@10 -- # set +x
00:26:49.045  [2024-12-16 06:37:05.940074] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:49.045   06:37:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@23 -- # local qds qd
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@24 -- # local target r
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:49.045   06:37:05	-- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:26:52.347  Initializing NVMe Controllers
00:26:52.347  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:26:52.347  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:26:52.347  Initialization complete. Launching workers.
00:26:52.347  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11148, failed: 0
00:26:52.347  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1181, failed to submit 9967
00:26:52.347  	 success 762, unsuccess 419, failed 0
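The counters are consistent with each other: of the 11148 I/Os the workload completed, 1181 had an abort command submitted against them (762 of those aborts completed successfully, 419 did not) and 9967 could not have an abort submitted, i.e. 1181 + 9967 = 11148 and 762 + 419 = 1181. The same relationship holds for the qd=24 and qd=64 runs below.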
00:26:52.347   06:37:09	-- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:52.347   06:37:09	-- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:26:55.634  [2024-12-16 06:37:12.394526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  [2024-12-16 06:37:12.394591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  [2024-12-16 06:37:12.394602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  [2024-12-16 06:37:12.394610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  [2024-12-16 06:37:12.394617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  [2024-12-16 06:37:12.394648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  [2024-12-16 06:37:12.394657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  [2024-12-16 06:37:12.394665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  [2024-12-16 06:37:12.394673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752480 is same with the state(5) to be set
00:26:55.634  Initializing NVMe Controllers
00:26:55.634  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:26:55.634  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:26:55.634  Initialization complete. Launching workers.
00:26:55.634  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5906, failed: 0
00:26:55.634  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1218, failed to submit 4688
00:26:55.634  	 success 244, unsuccess 974, failed 0
00:26:55.634   06:37:12	-- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:55.634   06:37:12	-- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:26:58.920  Initializing NVMe Controllers
00:26:58.920  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:26:58.920  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:26:58.920  Initialization complete. Launching workers.
00:26:58.920  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 33494, failed: 0
00:26:58.920  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2737, failed to submit 30757
00:26:58.920  	 success 531, unsuccess 2206, failed 0
00:26:58.920   06:37:15	-- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target
00:26:58.920   06:37:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:58.920   06:37:15	-- common/autotest_common.sh@10 -- # set +x
00:26:58.920   06:37:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:58.920   06:37:15	-- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:26:58.920   06:37:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:58.920   06:37:15	-- common/autotest_common.sh@10 -- # set +x
00:26:59.179   06:37:16	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.179   06:37:16	-- target/abort_qd_sizes.sh@62 -- # killprocess 92723
00:26:59.179   06:37:16	-- common/autotest_common.sh@936 -- # '[' -z 92723 ']'
00:26:59.179   06:37:16	-- common/autotest_common.sh@940 -- # kill -0 92723
00:26:59.179    06:37:16	-- common/autotest_common.sh@941 -- # uname
00:26:59.179   06:37:16	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:59.179    06:37:16	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92723
00:26:59.179  killing process with pid 92723
00:26:59.179   06:37:16	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:59.179   06:37:16	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:59.179   06:37:16	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 92723'
00:26:59.179   06:37:16	-- common/autotest_common.sh@955 -- # kill 92723
00:26:59.179   06:37:16	-- common/autotest_common.sh@960 -- # wait 92723
00:26:59.437  ************************************
00:26:59.437  END TEST spdk_target_abort
00:26:59.437  ************************************
00:26:59.437  
00:26:59.438  real	0m10.566s
00:26:59.438  user	0m43.278s
00:26:59.438  sys	0m1.763s
00:26:59.438   06:37:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:59.438   06:37:16	-- common/autotest_common.sh@10 -- # set +x
00:26:59.695   06:37:16	-- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target
00:26:59.695   06:37:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:59.695   06:37:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:59.695   06:37:16	-- common/autotest_common.sh@10 -- # set +x
00:26:59.695  ************************************
00:26:59.695  START TEST kernel_target_abort
00:26:59.695  ************************************
00:26:59.695   06:37:16	-- common/autotest_common.sh@1114 -- # kernel_target
00:26:59.695   06:37:16	-- target/abort_qd_sizes.sh@66 -- # local name=kernel_target
00:26:59.695   06:37:16	-- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target
00:26:59.695   06:37:16	-- nvmf/common.sh@621 -- # kernel_name=kernel_target
00:26:59.695   06:37:16	-- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet
00:26:59.695   06:37:16	-- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target
00:26:59.695   06:37:16	-- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:26:59.695   06:37:16	-- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:26:59.696   06:37:16	-- nvmf/common.sh@627 -- # local block nvme
00:26:59.696   06:37:16	-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]]
00:26:59.696   06:37:16	-- nvmf/common.sh@630 -- # modprobe nvmet
00:26:59.696   06:37:16	-- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:59.696   06:37:16	-- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:26:59.954  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:59.954  Waiting for block devices as requested
00:26:59.954  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:27:00.213  0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:27:00.213   06:37:17	-- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:27:00.213   06:37:17	-- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:00.213   06:37:17	-- nvmf/common.sh@640 -- # block_in_use nvme0n1
00:27:00.213   06:37:17	-- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:27:00.213   06:37:17	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:27:00.213  No valid GPT data, bailing
00:27:00.213    06:37:17	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:00.213   06:37:17	-- scripts/common.sh@393 -- # pt=
00:27:00.213   06:37:17	-- scripts/common.sh@394 -- # return 1
00:27:00.213   06:37:17	-- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1
00:27:00.213   06:37:17	-- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:27:00.213   06:37:17	-- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]]
00:27:00.213   06:37:17	-- nvmf/common.sh@640 -- # block_in_use nvme1n1
00:27:00.213   06:37:17	-- scripts/common.sh@380 -- # local block=nvme1n1 pt
00:27:00.213   06:37:17	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:27:00.213  No valid GPT data, bailing
00:27:00.213    06:37:17	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:27:00.471   06:37:17	-- scripts/common.sh@393 -- # pt=
00:27:00.471   06:37:17	-- scripts/common.sh@394 -- # return 1
00:27:00.471   06:37:17	-- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1
00:27:00.471   06:37:17	-- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:27:00.471   06:37:17	-- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]]
00:27:00.471   06:37:17	-- nvmf/common.sh@640 -- # block_in_use nvme1n2
00:27:00.471   06:37:17	-- scripts/common.sh@380 -- # local block=nvme1n2 pt
00:27:00.471   06:37:17	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2
00:27:00.471  No valid GPT data, bailing
00:27:00.471    06:37:17	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:27:00.471   06:37:17	-- scripts/common.sh@393 -- # pt=
00:27:00.471   06:37:17	-- scripts/common.sh@394 -- # return 1
00:27:00.471   06:37:17	-- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2
00:27:00.471   06:37:17	-- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:27:00.471   06:37:17	-- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]]
00:27:00.472   06:37:17	-- nvmf/common.sh@640 -- # block_in_use nvme1n3
00:27:00.472   06:37:17	-- scripts/common.sh@380 -- # local block=nvme1n3 pt
00:27:00.472   06:37:17	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3
00:27:00.472  No valid GPT data, bailing
00:27:00.472    06:37:17	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:27:00.472   06:37:17	-- scripts/common.sh@393 -- # pt=
00:27:00.472   06:37:17	-- scripts/common.sh@394 -- # return 1
00:27:00.472   06:37:17	-- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3
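Before a namespace is exported, each /sys/block/nvme* entry is checked for an existing partition table: scripts/common.sh first asks spdk-gpt.py (which prints 'No valid GPT data, bailing' here) and then falls back to blkid -s PTTYPE; an empty result means the disk is unused. A simplified sketch of that selection, keeping only the blkid fallback:

    block_in_use() {                    # succeed only if a partition table is present
        local block=$1 pt
        pt=$(blkid -s PTTYPE -o value "/dev/$block")
        [[ -n $pt ]]
    }

    for block in /sys/block/nvme*; do
        block_in_use "${block##*/}" || nvme=/dev/${block##*/}   # last free namespace wins
    done

Here every namespace is free, so the loop ends with nvme=/dev/nvme1n3, the device that will back the kernel target.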
00:27:00.472   06:37:17	-- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n3 ]]
00:27:00.472   06:37:17	-- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
00:27:00.472   06:37:17	-- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:27:00.472   06:37:17	-- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:00.472   06:37:17	-- nvmf/common.sh@652 -- # echo SPDK-kernel_target
00:27:00.472   06:37:17	-- nvmf/common.sh@654 -- # echo 1
00:27:00.472   06:37:17	-- nvmf/common.sh@655 -- # echo /dev/nvme1n3
00:27:00.472   06:37:17	-- nvmf/common.sh@656 -- # echo 1
00:27:00.472   06:37:17	-- nvmf/common.sh@662 -- # echo 10.0.0.1
00:27:00.472   06:37:17	-- nvmf/common.sh@663 -- # echo tcp
00:27:00.472   06:37:17	-- nvmf/common.sh@664 -- # echo 4420
00:27:00.472   06:37:17	-- nvmf/common.sh@665 -- # echo ipv4
00:27:00.472   06:37:17	-- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/
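xtrace hides the redirection targets of the echo commands above, but they map onto the standard nvmet configfs attributes; assuming that layout, the subsystem/port wiring amounts to:

    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"

    echo SPDK-kernel_target > "$sub/attr_model"             # presumed target of 'echo SPDK-kernel_target'
    echo 1                  > "$sub/attr_allow_any_host"    # accept any host NQN
    echo /dev/nvme1n3       > "$sub/namespaces/1/device_path"
    echo 1                  > "$sub/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                        # publish the subsystem on the port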
00:27:00.472   06:37:17	-- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e --hostid=637bef51-f626-4f39-9a90-287f11e9b21e -a 10.0.0.1 -t tcp -s 4420
00:27:00.472  
00:27:00.472  Discovery Log Number of Records 2, Generation counter 2
00:27:00.472  =====Discovery Log Entry 0======
00:27:00.472  trtype:  tcp
00:27:00.472  adrfam:  ipv4
00:27:00.472  subtype: current discovery subsystem
00:27:00.472  treq:    not specified, sq flow control disable supported
00:27:00.472  portid:  1
00:27:00.472  trsvcid: 4420
00:27:00.472  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:27:00.472  traddr:  10.0.0.1
00:27:00.472  eflags:  none
00:27:00.472  sectype: none
00:27:00.472  =====Discovery Log Entry 1======
00:27:00.472  trtype:  tcp
00:27:00.472  adrfam:  ipv4
00:27:00.472  subtype: nvme subsystem
00:27:00.472  treq:    not specified, sq flow control disable supported
00:27:00.472  portid:  1
00:27:00.472  trsvcid: 4420
00:27:00.472  subnqn:  kernel_target
00:27:00.472  traddr:  10.0.0.1
00:27:00.472  eflags:  none
00:27:00.472  sectype: none
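The discovery log confirms the kernel target is reachable: entry 0 is the discovery subsystem itself, entry 1 is kernel_target on 10.0.0.1:4420. A regular initiator could attach to it with nvme-cli, for example (shown only for illustration; this test drives I/O through SPDK's abort example instead):

    nvme connect -t tcp -a 10.0.0.1 -s 4420 -n kernel_target \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637bef51-f626-4f39-9a90-287f11e9b21e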
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@23 -- # local qds qd
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@24 -- # local target r
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:00.472   06:37:17	-- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:27:03.759  Initializing NVMe Controllers
00:27:03.759  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:27:03.759  Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:27:03.759  Initialization complete. Launching workers.
00:27:03.759  NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30525, failed: 0
00:27:03.759  CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30525, failed to submit 0
00:27:03.759  	 success 0, unsuccess 30525, failed 0
00:27:03.759   06:37:20	-- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:03.759   06:37:20	-- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:27:07.047  Initializing NVMe Controllers
00:27:07.047  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:27:07.047  Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:27:07.047  Initialization complete. Launching workers.
00:27:07.047  NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66611, failed: 0
00:27:07.047  CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26957, failed to submit 39654
00:27:07.047  	 success 0, unsuccess 26957, failed 0
00:27:07.047   06:37:23	-- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:07.047   06:37:23	-- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:27:10.338  Initializing NVMe Controllers
00:27:10.338  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:27:10.338  Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:27:10.338  Initialization complete. Launching workers.
00:27:10.338  NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 76078, failed: 0
00:27:10.338  CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18974, failed to submit 57104
00:27:10.338  	 success 0, unsuccess 18974, failed 0
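Assembled from the traced fragments of abort_qd_sizes.sh, rabort builds the transport-ID string one field at a time and then runs the abort example once per queue depth; roughly (a sketch, not a verbatim copy of the script):

    rabort() {
        local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
        local qds=(4 24 64) target="" r qd
        for r in trtype adrfam traddr trsvcid subnqn; do
            target+="${target:+ }$r:${!r}"   # -> 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
        done
        for qd in "${qds[@]}"; do
            /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
        done
    }

    rabort tcp IPv4 10.0.0.1 4420 kernel_target

The three runs exercise abort submission at increasing queue depth; in this log the deeper the queue, the larger the share of aborts that could not be submitted ('failed to submit' grows from 0 at -q 4 to 57104 at -q 64).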
00:27:10.338   06:37:26	-- target/abort_qd_sizes.sh@70 -- # clean_kernel_target
00:27:10.338   06:37:26	-- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]]
00:27:10.338   06:37:26	-- nvmf/common.sh@677 -- # echo 0
00:27:10.338   06:37:26	-- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
00:27:10.338   06:37:26	-- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:27:10.338   06:37:26	-- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:27:10.338   06:37:26	-- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target
00:27:10.338   06:37:26	-- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*)
00:27:10.338   06:37:26	-- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet
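clean_kernel_target tears the configfs tree down in the reverse order it was built, then unloads the target modules; with the same paths as during setup this is:

    echo 0 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable   # presumed target of 'echo 0'
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
    rmdir  /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/kernel_target
    modprobe -r nvmet_tcp nvmet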
00:27:10.338  
00:27:10.338  real	0m10.545s
00:27:10.338  user	0m5.087s
00:27:10.338  sys	0m2.741s
00:27:10.338   06:37:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:10.338   06:37:26	-- common/autotest_common.sh@10 -- # set +x
00:27:10.338  ************************************
00:27:10.338  END TEST kernel_target_abort
00:27:10.338  ************************************
00:27:10.338   06:37:27	-- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT
00:27:10.338   06:37:27	-- target/abort_qd_sizes.sh@87 -- # nvmftestfini
00:27:10.338   06:37:27	-- nvmf/common.sh@476 -- # nvmfcleanup
00:27:10.338   06:37:27	-- nvmf/common.sh@116 -- # sync
00:27:10.338   06:37:27	-- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:10.338   06:37:27	-- nvmf/common.sh@119 -- # set +e
00:27:10.338   06:37:27	-- nvmf/common.sh@120 -- # for i in {1..20}
00:27:10.338   06:37:27	-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:10.338  rmmod nvme_tcp
00:27:10.338  rmmod nvme_fabrics
00:27:10.338  rmmod nvme_keyring
00:27:10.338   06:37:27	-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:27:10.338   06:37:27	-- nvmf/common.sh@123 -- # set -e
00:27:10.338   06:37:27	-- nvmf/common.sh@124 -- # return 0
00:27:10.338   06:37:27	-- nvmf/common.sh@477 -- # '[' -n 92723 ']'
00:27:10.338   06:37:27	-- nvmf/common.sh@478 -- # killprocess 92723
00:27:10.338   06:37:27	-- common/autotest_common.sh@936 -- # '[' -z 92723 ']'
00:27:10.338   06:37:27	-- common/autotest_common.sh@940 -- # kill -0 92723
00:27:10.338  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (92723) - No such process
00:27:10.338  Process with pid 92723 is not found
00:27:10.338   06:37:27	-- common/autotest_common.sh@963 -- # echo 'Process with pid 92723 is not found'
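killprocess probes the target pid with kill -0 before trying to terminate it; the nvmf target (pid 92723) has already exited by this point, so only the informational message is printed. The guard follows this pattern (a sketch, not the exact autotest_common.sh body):

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2> /dev/null; then
            kill "$pid"                                    # still running: terminate it
        else
            echo "Process with pid $pid is not found"
        fi
    }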
00:27:10.338   06:37:27	-- nvmf/common.sh@480 -- # '[' iso == iso ']'
00:27:10.338   06:37:27	-- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:27:10.906  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:11.165  0000:00:06.0 (1b36 0010): Already using the nvme driver
00:27:11.165  0000:00:07.0 (1b36 0010): Already using the nvme driver
00:27:11.165   06:37:27	-- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:27:11.165   06:37:27	-- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:27:11.165   06:37:27	-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:11.165   06:37:27	-- nvmf/common.sh@277 -- # remove_spdk_ns
00:27:11.165   06:37:27	-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:11.165   06:37:27	-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:27:11.165    06:37:27	-- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:11.165   06:37:27	-- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:27:11.165  ************************************
00:27:11.165  END TEST nvmf_abort_qd_sizes
00:27:11.165  ************************************
00:27:11.165  
00:27:11.165  real	0m24.888s
00:27:11.165  user	0m49.896s
00:27:11.165  sys	0m5.931s
00:27:11.165   06:37:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:11.165   06:37:27	-- common/autotest_common.sh@10 -- # set +x
00:27:11.165   06:37:28	-- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:27:11.165   06:37:28	-- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:27:11.165   06:37:28	-- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:27:11.165   06:37:28	-- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:27:11.165   06:37:28	-- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:27:11.165   06:37:28	-- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:27:11.165   06:37:28	-- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:27:11.165   06:37:28	-- common/autotest_common.sh@722 -- # xtrace_disable
00:27:11.165   06:37:28	-- common/autotest_common.sh@10 -- # set +x
00:27:11.165   06:37:28	-- spdk/autotest.sh@373 -- # autotest_cleanup
00:27:11.165   06:37:28	-- common/autotest_common.sh@1381 -- # local autotest_es=0
00:27:11.165   06:37:28	-- common/autotest_common.sh@1382 -- # xtrace_disable
00:27:11.165   06:37:28	-- common/autotest_common.sh@10 -- # set +x
00:27:13.066  INFO: APP EXITING
00:27:13.066  INFO: killing all VMs
00:27:13.066  INFO: killing vhost app
00:27:13.066  INFO: EXIT DONE
00:27:13.634  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:13.634  0000:00:06.0 (1b36 0010): Already using the nvme driver
00:27:13.892  0000:00:07.0 (1b36 0010): Already using the nvme driver
00:27:14.460  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:14.460  Cleaning
00:27:14.460  Removing:    /var/run/dpdk/spdk0/config
00:27:14.460  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:14.460  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:14.460  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:14.460  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:14.460  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:27:14.460  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:27:14.460  Removing:    /var/run/dpdk/spdk1/config
00:27:14.460  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:27:14.460  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:27:14.460  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:27:14.460  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:27:14.460  Removing:    /var/run/dpdk/spdk1/fbarray_memzone
00:27:14.460  Removing:    /var/run/dpdk/spdk1/hugepage_info
00:27:14.460  Removing:    /var/run/dpdk/spdk2/config
00:27:14.460  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:27:14.460  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:27:14.460  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:27:14.460  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:27:14.460  Removing:    /var/run/dpdk/spdk2/fbarray_memzone
00:27:14.460  Removing:    /var/run/dpdk/spdk2/hugepage_info
00:27:14.460  Removing:    /var/run/dpdk/spdk3/config
00:27:14.460  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:27:14.460  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:27:14.719  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:27:14.719  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:27:14.719  Removing:    /var/run/dpdk/spdk3/fbarray_memzone
00:27:14.719  Removing:    /var/run/dpdk/spdk3/hugepage_info
00:27:14.719  Removing:    /var/run/dpdk/spdk4/config
00:27:14.719  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:27:14.719  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:27:14.719  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:27:14.719  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:27:14.719  Removing:    /var/run/dpdk/spdk4/fbarray_memzone
00:27:14.719  Removing:    /var/run/dpdk/spdk4/hugepage_info
00:27:14.719  Removing:    /dev/shm/nvmf_trace.0
00:27:14.719  Removing:    /dev/shm/spdk_tgt_trace.pid55495
00:27:14.719  Removing:    /var/run/dpdk/spdk0
00:27:14.719  Removing:    /var/run/dpdk/spdk1
00:27:14.719  Removing:    /var/run/dpdk/spdk2
00:27:14.719  Removing:    /var/run/dpdk/spdk3
00:27:14.719  Removing:    /var/run/dpdk/spdk4
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid55337
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid55495
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid55812
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56091
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56264
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56352
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56451
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56553
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56586
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56616
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56691
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid56803
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57441
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57505
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57574
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57602
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57677
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57705
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57784
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57812
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57864
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57894
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57940
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid57970
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58124
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58165
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58241
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58316
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58335
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58399
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58413
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58453
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58467
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58501
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58521
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58550
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58575
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58604
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58629
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58658
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58679
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58712
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58732
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58766
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58786
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58820
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58834
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58869
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58888
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58923
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58942
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58977
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid58991
00:27:14.719  Removing:    /var/run/dpdk/spdk_pid59031
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59045
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59079
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59099
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59128
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59153
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59182
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59207
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59236
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59253
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59298
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59315
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59358
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59372
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59407
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59426
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59462
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59536
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid59653
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid60088
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid67046
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid67401
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid69821
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid70207
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid70483
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid70530
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid70797
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid70800
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid70860
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid70917
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid70977
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71015
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71023
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71048
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71086
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71094
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71152
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71209
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71271
00:27:14.978  Removing:    /var/run/dpdk/spdk_pid71310
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid71317
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid71343
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid71638
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid71797
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid72061
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid72115
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid72508
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid73041
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid73471
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid74443
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid75435
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid75552
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid75620
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid77105
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid77344
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid77794
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid77904
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid78057
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid78097
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid78143
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid78187
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid78346
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid78499
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid78767
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid78884
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid79306
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid79695
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid79698
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid81957
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid82271
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid82786
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid82788
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid83136
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid83154
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid83169
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid83200
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid83207
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid83350
00:27:14.979  Removing:    /var/run/dpdk/spdk_pid83358
00:27:15.237  Removing:    /var/run/dpdk/spdk_pid83466
00:27:15.237  Removing:    /var/run/dpdk/spdk_pid83468
00:27:15.237  Removing:    /var/run/dpdk/spdk_pid83571
00:27:15.237  Removing:    /var/run/dpdk/spdk_pid83577
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid84056
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid84099
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid84256
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid84372
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid84773
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid85026
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid85520
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid86086
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid86563
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid86653
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid86739
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid86836
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid86994
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid87080
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid87176
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid87261
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid87612
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid88320
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid89684
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid89885
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid90176
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid90492
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid91050
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid91056
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid91425
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid91585
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid91747
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid91844
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid91999
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid92114
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid92792
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid92826
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid92863
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid93107
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid93141
00:27:15.238  Removing:    /var/run/dpdk/spdk_pid93181
00:27:15.238  Clean
00:27:15.238  killing process with pid 49748
00:27:15.238  killing process with pid 49751
00:27:15.496   06:37:32	-- common/autotest_common.sh@1446 -- # return 0
00:27:15.496   06:37:32	-- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:27:15.496   06:37:32	-- common/autotest_common.sh@728 -- # xtrace_disable
00:27:15.496   06:37:32	-- common/autotest_common.sh@10 -- # set +x
00:27:15.496   06:37:32	-- spdk/autotest.sh@376 -- # timing_exit autotest
00:27:15.496   06:37:32	-- common/autotest_common.sh@728 -- # xtrace_disable
00:27:15.496   06:37:32	-- common/autotest_common.sh@10 -- # set +x
00:27:15.496   06:37:32	-- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:15.496   06:37:32	-- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:27:15.496   06:37:32	-- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:27:15.496   06:37:32	-- spdk/autotest.sh@381 -- # [[ y == y ]]
00:27:15.496    06:37:32	-- spdk/autotest.sh@383 -- # hostname
00:27:15.496   06:37:32	-- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:27:15.754  geninfo: WARNING: invalid characters removed from testname!
00:27:37.719   06:37:53	-- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:39.622   06:37:56	-- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:42.156   06:37:58	-- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:44.690   06:38:01	-- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:47.224   06:38:03	-- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:49.756   06:38:06	-- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:51.659   06:38:08	-- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
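Reduced to its essentials, the coverage post-processing is: capture a test-time trace for this host, merge it with the pre-test baseline, and strip everything that is not SPDK's own code from the combined report. Condensed below, with the long --rc option list abbreviated to $LCOV_OPTS and the per-pattern removals folded into a single call:

    cd /home/vagrant/spdk_repo/spdk
    lcov $LCOV_OPTS -q -c --no-external -d . -t "$(hostname)" -o ../output/cov_test.info
    lcov $LCOV_OPTS -q -a ../output/cov_base.info -a ../output/cov_test.info -o ../output/cov_total.info
    lcov $LCOV_OPTS -q -r ../output/cov_total.info '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
         '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o ../output/cov_total.info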
00:27:51.659     06:38:08	-- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:27:51.659      06:38:08	-- common/autotest_common.sh@1690 -- $ lcov --version
00:27:51.659      06:38:08	-- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:27:51.659     06:38:08	-- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:27:51.659     06:38:08	-- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:27:51.659     06:38:08	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:27:51.659     06:38:08	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:27:51.659     06:38:08	-- scripts/common.sh@335 -- $ IFS=.-:
00:27:51.659     06:38:08	-- scripts/common.sh@335 -- $ read -ra ver1
00:27:51.659     06:38:08	-- scripts/common.sh@336 -- $ IFS=.-:
00:27:51.659     06:38:08	-- scripts/common.sh@336 -- $ read -ra ver2
00:27:51.659     06:38:08	-- scripts/common.sh@337 -- $ local 'op=<'
00:27:51.659     06:38:08	-- scripts/common.sh@339 -- $ ver1_l=2
00:27:51.659     06:38:08	-- scripts/common.sh@340 -- $ ver2_l=1
00:27:51.659     06:38:08	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:27:51.659     06:38:08	-- scripts/common.sh@343 -- $ case "$op" in
00:27:51.659     06:38:08	-- scripts/common.sh@344 -- $ : 1
00:27:51.659     06:38:08	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:27:51.659     06:38:08	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:51.659      06:38:08	-- scripts/common.sh@364 -- $ decimal 1
00:27:51.659      06:38:08	-- scripts/common.sh@352 -- $ local d=1
00:27:51.659      06:38:08	-- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:27:51.659      06:38:08	-- scripts/common.sh@354 -- $ echo 1
00:27:51.659     06:38:08	-- scripts/common.sh@364 -- $ ver1[v]=1
00:27:51.659      06:38:08	-- scripts/common.sh@365 -- $ decimal 2
00:27:51.659      06:38:08	-- scripts/common.sh@352 -- $ local d=2
00:27:51.659      06:38:08	-- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:27:51.659      06:38:08	-- scripts/common.sh@354 -- $ echo 2
00:27:51.659     06:38:08	-- scripts/common.sh@365 -- $ ver2[v]=2
00:27:51.659     06:38:08	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:27:51.659     06:38:08	-- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:27:51.659     06:38:08	-- scripts/common.sh@367 -- $ return 0
00:27:51.659     06:38:08	-- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
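The block above is autotest_common.sh asking whether the installed lcov (1.15 here) predates 2.0: cmp_versions splits both version strings on '.', '-' and ':' and compares the fields numerically from the left, so 1.15 < 2 because the first fields already differ. A compact standalone equivalent of that comparison (not the script's own cmp_versions):

    ver_lt() {                              # succeed if version $1 sorts before version $2
        local IFS=.-: i a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1                            # equal versions are not 'less than'
    }

    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: keep the old --rc option names"

Because the check succeeds, the 1.x spellings lcov_branch_coverage/lcov_function_coverage are exported in LCOV_OPTS below rather than the newer names lcov 2.x expects.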
00:27:51.659     06:38:08	-- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:27:51.659  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:51.659  		--rc genhtml_branch_coverage=1
00:27:51.659  		--rc genhtml_function_coverage=1
00:27:51.659  		--rc genhtml_legend=1
00:27:51.659  		--rc geninfo_all_blocks=1
00:27:51.659  		--rc geninfo_unexecuted_blocks=1
00:27:51.659  		
00:27:51.659  		'
00:27:51.659     06:38:08	-- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:27:51.659  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:51.659  		--rc genhtml_branch_coverage=1
00:27:51.659  		--rc genhtml_function_coverage=1
00:27:51.659  		--rc genhtml_legend=1
00:27:51.659  		--rc geninfo_all_blocks=1
00:27:51.659  		--rc geninfo_unexecuted_blocks=1
00:27:51.659  		
00:27:51.659  		'
00:27:51.659     06:38:08	-- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 
00:27:51.659  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:51.659  		--rc genhtml_branch_coverage=1
00:27:51.659  		--rc genhtml_function_coverage=1
00:27:51.659  		--rc genhtml_legend=1
00:27:51.659  		--rc geninfo_all_blocks=1
00:27:51.659  		--rc geninfo_unexecuted_blocks=1
00:27:51.659  		
00:27:51.659  		'
00:27:51.659     06:38:08	-- common/autotest_common.sh@1704 -- $ LCOV='lcov 
00:27:51.659  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:51.659  		--rc genhtml_branch_coverage=1
00:27:51.659  		--rc genhtml_function_coverage=1
00:27:51.659  		--rc genhtml_legend=1
00:27:51.659  		--rc geninfo_all_blocks=1
00:27:51.659  		--rc geninfo_unexecuted_blocks=1
00:27:51.659  		
00:27:51.659  		'
00:27:51.659    06:38:08	-- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:51.659     06:38:08	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:51.659     06:38:08	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:51.659     06:38:08	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:51.659      06:38:08	-- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:51.659      06:38:08	-- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:51.659      06:38:08	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:51.659      06:38:08	-- paths/export.sh@5 -- $ export PATH
00:27:51.659      06:38:08	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
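paths/export.sh does nothing more than prepend each toolchain directory and re-export PATH, which is why the same entries appear more than once in the final value:

    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH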
00:27:51.659    06:38:08	-- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:27:51.659      06:38:08	-- common/autobuild_common.sh@440 -- $ date +%s
00:27:51.659     06:38:08	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734331088.XXXXXX
00:27:51.659    06:38:08	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734331088.vvRkSZ
00:27:51.659    06:38:08	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:27:51.659    06:38:08	-- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:27:51.659    06:38:08	-- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:27:51.659    06:38:08	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:27:51.660    06:38:08	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:27:51.660     06:38:08	-- common/autobuild_common.sh@456 -- $ get_config_params
00:27:51.660     06:38:08	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:27:51.660     06:38:08	-- common/autotest_common.sh@10 -- $ set +x
00:27:51.660    06:38:08	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang'
00:27:51.660   06:38:08	-- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:27:51.660   06:38:08	-- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:27:51.660   06:38:08	-- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:51.660   06:38:08	-- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:27:51.660   06:38:08	-- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:51.660   06:38:08	-- spdk/autopackage.sh@19 -- $ timing_finish
00:27:51.660   06:38:08	-- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:51.660   06:38:08	-- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:51.660   06:38:08	-- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:51.918   06:38:08	-- spdk/autopackage.sh@20 -- $ exit 0
00:27:51.918  + [[ -n 5233 ]]
00:27:51.918  + sudo kill 5233
00:27:51.927  [Pipeline] }
00:27:51.942  [Pipeline] // timeout
00:27:51.947  [Pipeline] }
00:27:51.961  [Pipeline] // stage
00:27:51.967  [Pipeline] }
00:27:51.982  [Pipeline] // catchError
00:27:51.991  [Pipeline] stage
00:27:51.993  [Pipeline] { (Stop VM)
00:27:52.003  [Pipeline] sh
00:27:52.283  + vagrant halt
00:27:55.569  ==> default: Halting domain...
00:28:02.144  [Pipeline] sh
00:28:02.424  + vagrant destroy -f
00:28:04.956  ==> default: Removing domain...
00:28:05.226  [Pipeline] sh
00:28:05.507  + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:28:05.516  [Pipeline] }
00:28:05.531  [Pipeline] // stage
00:28:05.537  [Pipeline] }
00:28:05.551  [Pipeline] // dir
00:28:05.556  [Pipeline] }
00:28:05.570  [Pipeline] // wrap
00:28:05.577  [Pipeline] }
00:28:05.590  [Pipeline] // catchError
00:28:05.599  [Pipeline] stage
00:28:05.601  [Pipeline] { (Epilogue)
00:28:05.614  [Pipeline] sh
00:28:05.969  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:11.248  [Pipeline] catchError
00:28:11.250  [Pipeline] {
00:28:11.263  [Pipeline] sh
00:28:11.544  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:11.803  Artifacts sizes are good
00:28:11.812  [Pipeline] }
00:28:11.826  [Pipeline] // catchError
00:28:11.837  [Pipeline] archiveArtifacts
00:28:11.844  Archiving artifacts
00:28:11.966  [Pipeline] cleanWs
00:28:11.977  [WS-CLEANUP] Deleting project workspace...
00:28:11.977  [WS-CLEANUP] Deferred wipeout is used...
00:28:11.984  [WS-CLEANUP] done
00:28:11.986  [Pipeline] }
00:28:12.001  [Pipeline] // stage
00:28:12.006  [Pipeline] }
00:28:12.020  [Pipeline] // node
00:28:12.025  [Pipeline] End of Pipeline
00:28:12.070  Finished: SUCCESS